CIA, FBI push 'Facebook for spies'

When you see people at the office using such Internet sites as Facebook and MySpace, you might suspect those workers are slacking off.

But that's not the case at the CIA, the FBI and the National Security Agency, where bosses are encouraging their staff members to use a new social-networking site designed for the super-secret world of spying.

"It's every bit Facebook and YouTube for spies, but it's much, much more," said Michael Wertheimer, assistant deputy director of national intelligence for analysis.

The program is called A-Space, and it's a social-networking site for analysts within the 16 U.S. intelligence agencies.

Instead of posting thoughts about the new Avenged Sevenfold album or Jessica Alba movie, CIA analysts could use A-Space to share information and opinion about al Qaeda movements in the Middle East or Russian naval maneuvers in the Black Sea.

The new A-Space site has been undergoing testing for months and launches officially for the nation's entire intelligence community September 22.

Multibillion-dollar collider to probe nature's mysteries

Deep underground on the border between France and Switzerland, the world's largest particle accelerator complex will explore the world on smaller scales than any human invention has explored before.

The Large Hadron Collider will look at how the universe formed by analyzing particle collisions. Some have expressed fears that the project could lead to the Earth's demise -- something scientists say will not happen. Still, skeptics have filed suit to try to stop the project.

It even has a rap dedicated to it on YouTube.

Scientists say the collider is finally ready for an attempt to circulate a beam of protons the whole way around the 17-mile tunnel. The test, which takes place Wednesday, is a major step toward seeing if the immense experiment will provide new information about the way the universe works.

"It's really a generation that we've been looking forward to this moment, and the moments that will come after it in particular," said Bob Cousins, deputy to the scientific leader of the Compact Muon Solenoid experiment, one of six experiments inside the collider complex. "September 10 is a demarcation between finishing the construction and starting to turn it on, but the excitement will only continue to grow."

The collider consists of a particle accelerator buried more than 300 feet underground near Geneva, Switzerland. About $10 billion has gone into the accelerator's construction, the particle detectors and the computers, said Katie Yurkewicz, spokeswoman for CERN, the European Organization for Nuclear Research, which is host to the collider.

In the coming months, the collider is expected to begin smashing particles into each other by sending two beams of protons around the tunnel in opposite directions. It will operate at higher energies and intensities in the next year, and the experiments could generate enough data to make a discovery by 2009, experts say.

Testing the unknown

Experts say the collider has the potential to confirm theories about questions that physicists have been working on for decades, including the possible existence of extra dimensions. They also hope to find a theoretical particle called the Higgs boson, which has never been detected but would help explain why matter has mass.

Large Hadron Collider down for 2 months

GENEVA, Switzerland (AP) -- The world's largest atom smasher -- which was launched with great fanfare earlier this month -- is more badly damaged than previously thought and will be out of commission for at least two months, its operators said Saturday.

Experts have gone into the 17-mile (27-kilometer) circular tunnel housing the Large Hadron Collider under the Swiss-French border to examine the damage that halted operations about 36 hours after its September 10 startup, said James Gillies, spokesman for CERN, the European Organization for Nuclear Research.

"It's too early to say precisely what happened, but it seems to be a faulty electrical connection between two magnets that stopped superconducting, melted and led to a mechanical failure and let the helium out," Gillies told The Associated Press.

Gillies said the sector that was damaged will have to be warmed up well above the absolute zero temperature used for operations so that repairs can be made -- a time-consuming process.

"A number of magnets raised their temperature by around 100 degrees," Gillies said. "We have now to warm up the whole sector in a controlled manner before we can actually go in and repair it."

The $10 billion particle collider, in the design and construction stages for more than two decades, is the world's largest atom smasher. It fires beams of protons from the nuclei of atoms around the tunnels at nearly the speed of light.

It then causes the protons to collide, revealing how the tiniest particles were first created after the "big bang," which many theorize was the massive explosion that formed the stars, planets and everything else.

Gillies said such failures occur frequently in particle accelerators, but it was made more complicated in this case because the Large Hadron Collider operates at near absolute zero, colder than outer space, for maximum efficiency.

"When they happen in our other accelerators, it's a matter of a couple of days to fix them," Gillies said. "But because this is a superconducting machine and you've got long warmup and cool-down periods, it means we're going to be off for a couple of months."

He said it would take "several weeks minimum" to warm up the sector.

"Then we can fix it," Gillies said. "Then we cool it down again."

CERN announced Thursday that it had shut down the collider a week ago after a successful startup that had beams of protons circling in both clockwise and counterclockwise directions in the collider.

It was at first thought the failure of an electrical transformer that handles part of the cooling was the problem, CERN said. That transformer was replaced last weekend and the machine was lowered back to operating temperature to prepare for a resumption of operations.

But then more inspections were needed and it was determined that the problem was worse than initially thought, said Gillies.

The CERN experiments with the particle collider hope to reveal more about "dark matter," antimatter and possibly hidden dimensions of space and time. They could also find evidence of a hypothetical particle -- the Higgs boson -- which is sometimes called the "God particle" because it is believed to give mass to all other particles, and thus to matter that makes up the universe.

Smaller colliders have been used for decades to study the makeup of the atom. Scientists once thought protons and neutrons were the smallest components of an atom's nucleus, but experiments have shown that protons and neutrons are made of quarks and gluons and that there are other forces and particles.

Data Deduplication Addresses Key Storage Concerns

That data deduplication is one of the hottest storage-related technologies should come as no surprise. These systems promise streamlined storage, reduced costs and improved performance.

Given the explosion of information and the desire to provide more cost-effective storage, what IT executive wouldn’t want to consider deduplication?

Deduplication technology identifies variable-length blocks of data across various files and file types. It then stores unique blocks once, replacing redundant blocks with “data pointers.” When an incoming data block is a duplicate of something that’s already stored, the block isn’t stored again. If the block is unique, it’s stored on disk. With data deduplication, organizations can store a lot more data in a smaller amount of physical storage space.
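
As a rough illustration of that mechanism, here is a minimal Python sketch, assuming fixed-size blocks and an in-memory dictionary standing in for the on-disk block store and index (commercial products use variable-length blocks, persistent indexes and far more robust metadata):

```python
import hashlib

BLOCK_SIZE = 4096          # assumed block size in bytes
block_store = {}           # hash -> unique block contents, stored only once

def dedupe(data: bytes) -> list:
    """Split data into blocks; return a list of pointers (block hashes).
    Unique blocks go into block_store; duplicates only add a pointer."""
    pointers = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:   # unique block: store it
            block_store[digest] = block
        pointers.append(digest)         # duplicate or not, keep a pointer
    return pointers

def restore(pointers: list) -> bytes:
    """Rebuild the original data by following the pointers."""
    return b"".join(block_store[p] for p in pointers)

# Example: a file with heavy repetition needs very few unique blocks.
file_data = b"A" * 40960 + b"B" * 4096
ptrs = dedupe(file_data)
assert restore(ptrs) == file_data
print(len(ptrs), "blocks referenced,", len(block_store), "unique blocks stored")
```

Running it on data with heavy repetition shows many pointers referencing only a handful of stored blocks, which is where the capacity savings come from.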

An Enterprise Strategy Group (ESG) report published in January 2008 found that 64 percent of deduplication users have experienced a 10:1 or greater capacity reduction ratio.

Not surprisingly, a fall 2007 ESG survey regarding data protection indicates that use of data deduplication products is expected to increase significantly. More than one-third of the respondents said they plan to use file-level data deduplication in the future, and 25 percent expect to add sub-file deduplication capabilities.

The increasing reliance on disk storage and the growing volume of data that needs to be protected are two factors that make deduplication so enticing, says Lauren Whitehouse, an analyst at ESG. “The economics of applying disk in the backup process further improve with deduplication,” she says. “By implementing data deduplication technology that identifies and eliminates data redundancy, the amount of data that is transferred and stored is reduced.”

But it’s not as simple as selecting a system and letting the technology do its thing. “The biggest challenge for many organizations is understanding their requirements and selecting a vendor/solution that meets its needs,” Whitehouse says. “There are a lot of vendors and approaches to wade through, which also means there are many choices to find an optimal solution.”

Whitehouse says one of the key concerns is whether to go with a software or hardware approach. Typically, backup software performs deduplication at the source, identifying duplicates before data is transferred across a network. Among the benefits are integration with the backup software; more intelligence about the data set; less data transferred across the network, which is especially important in server virtualization environments where there’s a lot of redundant data; the ability to use any type of disk; and fewer issues with scalability. The drawback is that not all backup vendors offer deduplication.

Deduplication is also a feature of many storage systems and hardware appliances. The benefits are that the deduplication hardware is optimized for the process, works with any backup software and can be implemented quickly, Whitehouse says. One drawback is that not all solutions scale, causing issues when capacity thresholds are hit.

Some products perform deduplication at the file level, others at a block or byte level. Whitehouse says the differences between the approaches have to do with computational time, accuracy, level of duplication detected, index size and scalability. File-level deduplication checks file attributes and eliminates redundant copies of files stored on backup media. Although this method delivers less capacity reduction than other methods, it’s simple and fast.

The general rule, Whitehouse says, is that the more granular the segment being inspected, the more redundancy that can be detected. Also, the smaller the segment being inspected, the more segments that need to be examined and compared, which could take longer.
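
A small, self-contained experiment can make that tradeoff concrete. The workload and chunk sizes below are purely hypothetical; the point is only that finer segments detect more redundancy while producing a larger index to compute and compare:

```python
import hashlib

def measure(data: bytes, chunk_size: int):
    """Return (index entries, bytes stored after dedup) for one chunk size."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).hexdigest(): c for c in chunks}
    return len(chunks), sum(len(c) for c in unique.values())

# Hypothetical workload: ~1 MB built from a repeating 1 KB pattern, with one
# small edit in every 64 KB region, each at a slightly different position.
pattern = bytes(range(256)) * 4                  # 1 KB repeating unit
data = bytearray(pattern * 1024)                 # 1 MB
for k in range(16):
    data[k * 65536 + k] ^= 0xFF                  # scattered one-byte edits
data = bytes(data)

for size in (64 * 1024, 8 * 1024, 1024):
    entries, stored = measure(data, size)
    print(f"chunk={size:>6} B  index entries={entries:>5}  bytes stored={stored}")
```

With 64 KB chunks the scattered edits defeat deduplication entirely; with 1 KB chunks almost everything deduplicates, at the cost of a much larger index.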

Another consideration is how deduplication is applied in an environment. “If there are multiple sources being backed up, does [deduplication] occur across them, in addition to within them?” Whitehouse says. “This could occur in cases such as multiple remote and branch-office backup consolidation, or multiple systems replicating to a [disaster recovery] site.”

One of the biggest risks in selecting a deduplication solution is long-term viability. The most popular deduplication approach today is to install a disk-to-disk hardware system. But these solutions don't scale, which means that when a company hits a capacity threshold, it must upgrade the solution or add another system. Also, with multiple appliances in one environment that can’t be centrally managed, there are new management challenges.

Determining which approach and vendor is best depends on specific needs and a company’s tolerance for drawbacks. Deduplication requirements for remote and branch office consolidation might be different from those for data center backup. “Organizations need to map priorities and requirements and vet solutions based on these criteria,” Whitehouse adds.

VoIP, Wi-Fi and the Path to Fixed Mobile Convergence

Vo-Fi is making its way into the enterprise, putting the holy grail of mobility within reach. This report will prepare you for the obstacles you will encounter.

Voice over IP on Wi-Fi has been promoted by some vendors and analysts as a killer application, but the adoption of this technology has fallen short of expectations, with significant business and technological obstacles impeding it. In certain vertical markets, including health care and retail, Vo-Fi has delivered healthy return on investment, but the technology has been relatively slow to gain traction in the carpeted enterprise. However, the recent emergence of fixed-mobile convergence solutions based on dual-mode phones capable of operating over both WLANs and cellular networks is breathing new life into this market. As the path to a new age of mobile data and voice services is carved out, this area will become increasingly critical to IT. This report provides a comprehensive assessment of Vo-Fi implementation in the enterprise, drawing from the author's extensive experience with mobility issues, as well as research conducted among technology professionals.

IT department to open 300 IT Training centers


KARACHI: The Sindh Information Technology Department has proposed opening 300 more training centers to promote information technology in the province.

This information was given by the Sindh Minister for IT, Raza Haroon, while replying to a supplementary question during question hour in the Sindh Assembly.

He said the department plans to introduce a syllabus for the proposed 300 centers. He pointed out that the existing centers are merely IT awareness centers and therefore award no formal certificate to trainees.

Raza Haroon said the IT department has established 40 IT awareness centers in 23 districts of Sindh, including 11 in Karachi, four in Hyderabad, two each in Larkana, Mirpurkhas, Sukkur and Khairpur, and one each in the other districts.

He elaborated that admission is open to all interested people, students or otherwise. He pointed out that IT training institutes awarding certificate and diploma courses are controlled by the Technical Board of Education, which determines admission, fee structure, syllabus and other terms and conditions.

The Minister said the Sindh Information Technology Department was executing a project called Call Center Training for jobless graduates and undergraduates. He said the project envisages training 1,000 jobless people during the project period. So far, 395 candidates have completed their call center training, and the remaining 605 will be trained in the next financial year.

Replying to a question from Nadeem Ahmed Bhutto, provincial industries minister Rauf Siddiqi said that over Rs 20 million was paid in advance to the Estate Engineer, SITE, during the period from July to September 2006.

He said that, as per the rules, a measurement book is not required for departmental works; in such cases, payments are made directly to the contractor against cash bills.

The Minister agreed with the questioner that an inquiry into the matter is essential, adding that if the involvement of the officials concerned is proved, action will be taken.

Need Faster Applications and More Bandwidth? Consider WAN Acceleration

Scalable network optimization technology can help organizations effectively speed access to information.
Many organizations operate wide area networks to provide access to applications and data. But sometimes these WANs are not capable of providing access at speeds that users want or need. That can result in problems such as decreased productivity and customer service degradation.

To deal with this challenge, a growing number of businesses are turning to WAN acceleration technologies. These products are designed to speed up applications and protocols running over a WAN. They use techniques such as low-level compression and protocol optimization to provide acceleration.
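
The compression part of that idea can be sketched in a few lines of Python, with zlib standing in for whatever algorithm a real appliance uses and the two functions standing in for the sending- and receiving-side devices; caching, deduplication and protocol optimization are not modeled here:

```python
import zlib

def send_over_wan(payload: bytes) -> bytes:
    """Stand-in for the sending-side appliance: compress before transmit."""
    return zlib.compress(payload)

def receive_from_wan(wire_bytes: bytes) -> bytes:
    """Stand-in for the receiving-side appliance: restore the original bytes."""
    return zlib.decompress(wire_bytes)

# Example: repetitive application traffic compresses very well.
payload = b"GET /reports/q3.xlsx HTTP/1.1\r\nHost: hq.example.com\r\n\r\n" * 200
wire = send_over_wan(payload)
assert receive_from_wan(wire) == payload
print(f"{len(payload)} bytes offered, {len(wire)} bytes on the WAN "
      f"({100 * (1 - len(wire) / len(payload)):.1f}% reduction)")
```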

With WAN acceleration, organizations can not only boost the speed of end-user access to applications, but also get the most out of their existing network bandwidth and improve disaster recovery operations, among other benefits.

“The market is definitely still growing and we haven’t seen a slowdown in end-user demand for WAN acceleration,” says Robert Whiteley, principal analyst, Network Performance and Security, at Forrester Research. “In fact, as IT [managers] look to trim the fat from budgets, they often turn to consolidation projects, which in turn increase the need for WAN optimization to accelerate connections to centralized data center resources.”

Whiteley says many of the benefits of WAN optimization are hard to quantify, but include increased productivity, improved performance of revenue-generating applications, and streamlined disaster recovery via accelerated remote backups and data replication.

“Most companies list these as key drivers, but the business case rests on good old bandwidth savings,” Whiteley says. For example, WAN optimization can decrease link utilization from 80 percent to 40 percent through the use of caching and compression. “This often prolongs a major WAN bandwidth upgrade,” he says. “We’ve seen clients with highly latent international links see a return on investment [in] as short as three months. If you have a regional or nationwide network, you can expect a tamer 12- to 18-month ROI.”
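
A back-of-the-envelope payback calculation along those lines might look like the following; every figure is a hypothetical placeholder rather than vendor pricing, and the simplifying assumption is that halved utilization translates into circuit cost that can be deferred:

```python
appliance_cost = 40_000.0        # assumed cost of WAN optimization gear ($)
monthly_link_cost = 8_000.0      # assumed cost of the WAN circuit per month ($)
utilization_before = 0.80        # e.g., 80% link utilization before optimization
utilization_after = 0.40         # e.g., 40% after caching and compression

# Treat the freed capacity as bandwidth spend you can defer.
monthly_savings = monthly_link_cost * (1 - utilization_after / utilization_before)
payback_months = appliance_cost / monthly_savings
print(f"Estimated savings: ${monthly_savings:,.0f}/month; "
      f"payback in about {payback_months:.1f} months")
```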

The primary challenges of deploying WAN acceleration have to do with lack of due diligence in technology selection, Whiteley says. “Companies often fall in love with acceleration benefits, but when they go to scale the deployment [either adding more traffic or more sites], they hit barriers,” he says.

It’s critical that enterprises select WAN acceleration with proper scalability, including support of the right throughput, number of TCP sessions, disk capacity and number of sites, Whiteley says. They also need to make sure the technology is reliable, in terms of hardware and software modularity, redundancy and failover. In addition, the technology must be transparent to users. That includes deploying WAN acceleration with the appropriate layer in the network topology.

“Typically, we see that companies will still realize the key benefits if they don’t take these challenges into account, but it can often hamstring the potential gains,” Whiteley says. “Also, in some cases, companies will spend extra cycles on the IT operations side to address an issue.”

Building a business case for green computing initiatives keeps key stakeholder interests top of mind.
Embarking on an effort to make IT infrastructures more energy efficient has clear potential benefits for the environment and can help organizations become better corporate citizens. It can also make a lot of economic sense.

To evaluate the true economic benefits of green IT, technology executives need to examine costs as well as savings, and work with people in finance to develop realistic goals.

“There are many ways that organizations can save money through ‘green’ IT efforts, but energy savings seem to prevail as a recurring factor in most green IT initiatives,” says Ruben Melendez, CEO and executive analyst at Glomark-Governan. “When evaluating green IT investments, most IT organizations look to both improve operating efficiencies and reduce costs.”

Common green IT initiatives that fit both of these goals include data center consolidation, desktop virtualization, server consolidation and the use of multifunctional devices.

When looking at the financial side of these green IT investments, there is no one formula that applies to all investments in every organization, Melendez says. To effectively evaluate an investment in a green IT initiative, and forecast the resulting economic and financial impact on both the IT organization and the business, organizations need to create an objective business case, he says.

“Situations may exist where a solution may reduce IT energy costs and consolidate operations within the group, but negatively [affect] the business,” Melendez says. “Without exposing all the associated benefits, costs and risks of both investing and not investing in a green IT solution, executive buy-in will be limited at best.”

The costs of going green don’t start and stop with upfront cash investments and ongoing maintenance costs. “Green initiatives, depending on the adaptability of the company, may require significant change management costs,” Melendez says. For example, investing in a new data center consolidation solution might require re-training staff in new IT management processes, or new staff might need to be added.

Melendez says IT executives should work directly with finance throughout the entire proposed IT investment process. “When building a business case for a green IT solution, IT staff should consult with the finance staff to obtain realistic actual and [predicted] values to assess the benefits, costs and risks of investing and not investing in the solution,” he says. With more accurate financial data, the case for purchasing or not purchasing a solution will generate more objective forecasts.

“Over and above the economic impacts and benefits of being a socially responsible corporate citizen, companies should always consider what is best for their stakeholders,” Melendez says. “For private companies, will a green IT solution bring significant cost reductions or improve operating efficiencies? Or, are decision makers predominantly influenced by a need to have a ‘green’ footprint in the community?”

For publicly traded companies, “shareholders’ interest remains the most important factor in any investment,” he says. “Adding yet another bullet point to the corporate social responsibility resume will likely yield positive reviews from shareholders. However, if the return on the green IT investment does not help to move share prices upward, the social responsibility benefits become a moot point.”

Bolstered Security

Network behavior analysis keeps a close watch on traffic flow.

When it comes to enterprise networks, organizations can never have too much security. If the information moving across these networks is compromised, stolen, damaged or misused, the results could range from lost revenue to regulatory fines to public outrage — depending on what type of information is involved.

To really gain insight into how secure their data networks are, organizations need to know what’s actually happening on the networks. Network behavior analysis (NBA) systems are designed to help organizations gain greater visibility into network activity so they can more easily detect anomalies that might indicate malicious or suspicious actions.

NBA systems work by analyzing network traffic patterns through data gathered from network devices such as IP traffic flow systems or via packet analysis. They alert managers whenever there’s any type of suspicious activity, and enable managers to analyze and respond to such activity before any major harm is done to data or systems.
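
A heavily simplified sketch of that flow-based approach, in Python, might baseline how many distinct destinations each host contacts and flag hosts that suddenly fan out far beyond the norm; field names and thresholds here are assumptions, not any vendor's schema:

```python
from collections import defaultdict
from statistics import mean, pstdev

def baseline(flows):
    """flows: iterable of (src_host, dst_host) pairs for one interval.
    Returns mean and std-dev of distinct destinations contacted per host."""
    per_host = defaultdict(set)
    for src, dst in flows:
        per_host[src].add(dst)
    counts = [len(dsts) for dsts in per_host.values()]
    return mean(counts), pstdev(counts)

def anomalies(flows, mu, sigma, k=3.0):
    """Flag hosts contacting far more destinations than the baseline suggests,
    a pattern typical of reconnaissance scanning or a spreading worm."""
    per_host = defaultdict(set)
    for src, dst in flows:
        per_host[src].add(dst)
    threshold = mu + k * max(sigma, 1.0)
    return {h: len(d) for h, d in per_host.items() if len(d) > threshold}

# Example: most hosts talk to a handful of peers; one host fans out widely.
normal = [(f"10.0.0.{i}", f"10.0.1.{i % 5}") for i in range(1, 50)]
mu, sigma = baseline(normal)
scanning = normal + [("10.0.0.99", f"10.0.2.{n}") for n in range(200)]
print(anomalies(scanning, mu, sigma))   # expect {'10.0.0.99': 200}
```

Real products build far richer baselines (ports, byte counts, time of day), but the alert-on-deviation principle is the same.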

There has been steady growth of interest in NBA technology, but it remains a small market, says Lawrence Orans, research director at Gartner. “We don’t anticipate a ‘hockey stick’ curve in NBA interest any time soon,” Orans says. “Overall, the demand is driven by a need for more visibility in the network.”

According to Gartner, NBA can be used to detect network behavior that might not be detected by other security technologies such as firewalls, intrusion prevention software, and security information and event management (SIEM) systems. Gartner says those technologies might not identify certain threats unless they are specifically configured to look for them.

Gartner research recommends that organizations should implement firewalls and intrusion detection/intrusion prevention (IDS/IPS) systems before investing in NBA systems.

The potential benefits of NBA come in two primary areas: security and network operations, Orans says. The security benefits include monitoring networks for malware. NBA detects unauthorized reconnaissance scanning by attackers looking for prospective targets. The systems can also detect infected devices that are spreading worm traffic through a network, unauthorized applications and rogue Web servers. They can monitor guest access to the network and generate audit-trail reports.

Operations benefits include improved network troubleshooting, Orans says. NBA can help administrators reduce the time they need to resolve network problems. The products also help identify real threats versus network performance issues, and can detect bandwidth-consuming downloads that can affect performance.

One of the biggest challenges of using NBA systems is the possibility of getting false positives, which can result in administrators spending lots of time chasing down alerts that turn out to be nothing problematic. One way to help minimize the false positives is to effectively configure and fine-tune the systems before putting them into production on the network.

Orans says there is a common misconception that NBA systems can enable automated response capabilities to contain attacks and protect against threats. In reality, he says, most administrators are reluctant to enable automated responses because of the high potential for false positives.

Using IT to Help Manage the Supply Chain

SCM software can increase efficiencies — and if you're not careful, it can also add complexity.
Supply chains are more intricate than ever — especially for organizations that operate globally and with many suppliers, distributors and customers. Keeping track of materials demand and sourcing, product demand and inventory, the location of goods, financial transactions and other factors can be daunting.

Supply chain management (SCM) software can help enterprises automate the planning and execution of supply chain activities, and coordinate the movement of materials, goods and finances. This can result in significant efficiencies and process improvements.

But deploying SCM technology also comes with challenges: selecting the right technology to avoid complexity pitfalls; being realistic about implementation planning and execution; and the need to tweak applications as conditions change over time. Failure to address these could result in higher costs, greater complexity and fewer benefits from SCM.

The market for SCM applications is growing. Worldwide SCM software revenue totaled $6 billion in 2007, up 18 percent from $5.1 billion in 2006, according to a report released in June by research firm Gartner Inc.

Globalization is a primary driver of the growth, through the need for businesses to accelerate time to market for new products, services and geographies, says Chad Eschinger, research director, Software Market Research Team, at Gartner.

Through 2012, several factors will affect SCM software revenue growth, Eschinger says. These include increased global competition and the need for enhanced customer service, which will force many businesses to explore means to achieve greater value within their supply chains.

“Given the economic climate, with credit issues and skyrocketing energy costs combined with increasing SCM complexity, risk and globalization, there is a lot of pressure on [organizations] to deliver basic results,” says Dwight Klappich, vice president covering supply chain execution applications at Gartner.

For example, Klappich says, over the past two years logistics costs have risen, whereas during the previous decade logistics costs were consistently declining. This is forcing companies to improve basic operating metrics.

Most organizations with a sizable SCM operation now realize that SCM software is critical to their business, Klappich says. “Not all organizations see it as a source of strategic differentiation, but they do still see it as important,” he says. “Many organizations now see that they can either positively affect their competitiveness with SCM investments or lose ground to competitors if they don’t.”

But SCM comes with a set of challenges and risks that organizations must address. One issue is the need to select the right technology, which doesn’t always mean the most powerful system available.

Klappich says he created a model to help clients make this determination. “The model looks at things like user freight spending, complexity, sophistication, process maturity, etc., to help determine an organization’s needs,” he says. “It is not uncommon to find that while [the organization] might need a robust solution, given spending and complexity, [it is] not ready for advanced systems, [due to] lack of sophistication and process maturity.”

Second, organizations need to be realistic about implementation. “Vendors will often present very optimistic plans that are unrealistic, but just as often the users are pushing so hard they develop their own unrealistic plan,” Klappich says. For example, don’t underestimate the time, effort and cost for the upfront modeling phase.

Finally, for many SCM applications, continuing improvement demands that an organization continually adapt the applications to changing business conditions. “This process is continual, yet many organizations do not plan or staff for this, especially if they use third parties for implementation,” Klappich says. “The right thing to do is to not think ‘project’ but ‘long-term process and governance.’ ”

SOA: Convergence and Consolidation Tech Report

As the service-oriented architecture industry mashes itself up, we look at which types of intermediary products are necessary and what can be brought inside the network infrastructure.

The SOA intermediary market is undergoing rapid consolidation as startups seek to expand beyond their niches and larger players offer suites that claim to cover all of an enterprise's service integration needs. In this report, we examine the current state of the four main SOA intermediary product categories, examining how they overlap and how the vendors within each category plan to expand:

ESB (ENTERPRISE SERVICE BUS)
The ESB is rapidly being commoditized, thanks to open-source options and its incorporation within other products. It is already a standard offering from service platform vendors and likely to become one within BPM (business process management) suites. To address this, ESB vendors are moving higher up the stack to BPM, CEP (complex event processing) or RIAs (rich Internet applications, the main technology in Web 2.0).

DESIGN-TIME GOVERNANCE
This is the least mature product category within SOA, mostly because it is needed only in relatively large deployments. It is a likely target for expansion from ESB and Web services management vendors looking to add value, though it may splinter as each targets a different part: ESBs come from the application development world and so are more likely to expand into the repository, while management platforms are more likely to offer the registry.

RUN-TIME MANAGEMENT
Web 2.0 has given this category a new lease on life, as a management platform can take the place of an ESB in Web services installations that are primarily designed for point-to-point connectivity. Many of these may evolve into more complete SOAs that require an ESB, but there will remain a place for standalone management tools.

XML SECURITY GATEWAY
Specialist XML security gateways are disappearing, but XML firewall functionality is more critical than ever. The security vendors themselves are moving into software and Web services management. Hardware XML firewalls will increasingly move inside the network infrastructure, supplied by AFE (application front end) vendors that can also provide hardware acceleration of other SOA functions.

Report Table of Contents:

Executive Summary
ESB (Enterprise Service Bus)
Design-Time Governance
Run-time Management
XML Security Gateway

SOA Product Categories
How It All Fits Together
The Enterprise Service Bus
Design-Time Governance
Security Gateways
Adjacent Product Categories

Vendor Analysis
AmberPoint
BEA Systems
Cape Clear
Cisco Systems
Fiorano
IBM
IONA
Layer 7 Technologies
LogicLibrary
Oracle
Progress Software
Software AG
Sun Microsystems
TIBCO
Vitria
Vordel
Xtradyne Technologies

SOA Vendors and Product Categories


Sales price: $499
Now available FREE, courtesy of IBM.

Considering Multisourcing? Make Sure You're Ready

Using more than one service provider can pay off — or add costs.
Whether to outsource IT functions and which service providers to use are among the more important decisions CIOs and other senior executives face. The right choices can result in cost savings, service improvements and other gains. The wrong ones can lead to runaway costs, a decline in service levels and unhappy customers and employees.

More often than in the past, organizations are employing multiple service providers to handle different functions, rather than hiring one outsourcing company to provide multiple services. This strategy can pay off nicely. But it can also create headaches for IT and business managers.

Clearly there’s a move toward using multiple service providers rather than awarding a huge outsourcing project to a single provider.

Gartner Inc., in a report published in June, says the number of awarded “megadeals” — which the firm characterizes as being worth more than $1 billion — declined from 12 in 2006 to 10 in 2007.

“Multisourcing is more than a trend; it’s here,” says Kurt Potter, research director, Outsourcing & IT Services, at Gartner. The firm’s research shows that 57 percent of organizations worldwide use two or more IT outsourcers or external service providers (ESPs) and 55 percent of organizations plan to increase the number of ESPs in the next year, Potter says. Only 4 percent of organizations plan to reduce the number of ESPs.

What are the advantages of going with multiple providers? “Organizations move to multisourcing or best of breed sourcing strategies to create a competitive environment between providers in hopes of increasing service quality, putting pressure on providers to keep costs low, and to get better services than would be the case when one provider delivers all services,” Potter says.

Also, as services become more mature and commodity-like in some instances, the standards that come along with this market maturity make it easier to combine the solutions and services of different providers, Potter says.

But the multisource strategy can have its problems. “Many organizations discover that they are not organized [for] nor have the governance to successfully manage multiple providers,” Potter says. “We see this often when leadership or staff who were good at service delivery and service quality now have to move to a softer-skill approach to managing providers.”

Without the right governance, organizations will have problems with the handoff points between providers, and may lose track of which provider is responsible for certain end-to-end service levels, Potter says. Also, it costs more to manage multiple providers; 1.5 percent to 5 percent of internal IT budgets are devoted to managing IT outsourcing, he says. Heavy multisourcing will increase this to 10 percent.
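
To put those percentages in context, a quick calculation against a hypothetical internal IT budget shows how the governance bill grows; the budget figure below is an assumed example, while the percentages come from the text:

```python
it_budget = 50_000_000                    # assumed annual internal IT budget ($)
typical_low, typical_high = 0.015, 0.05   # 1.5% to 5% for managing outsourcing
heavy_multisourcing = 0.10                # roughly 10% with heavy multisourcing

print(f"Typical governance cost: ${it_budget * typical_low:,.0f} "
      f"to ${it_budget * typical_high:,.0f} per year")
print(f"Heavy multisourcing:     ${it_budget * heavy_multisourcing:,.0f} per year")
```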

“Multisourcing as a phenomenon is nothing new,” Potter says. “Organizations worldwide have used more than one supplier or service provider to meet the demands of their organizations. What has changed is that with outsourcing, few organizations have the right expectations or experience to manage multiple service providers, who are de facto integrators of many sub-IT services.”

The addition of more service providers means more handoff points between competing service providers, which creates a governance nightmare when organizations are trying to ensure that problems are corrected and service levels are met, Potter says. When IT outsourcers come into conflict, organizations must ensure that all affected service providers are charged with correcting the problems for the benefit of their ultimate and shared client.

Operating-level agreements (OLAs) are one mechanism organizations should use to set the ground rules where provider responsibilities overlap. “Some organizations will enter outsourcing without the necessary skills to manage multiple providers and should evaluate whether they should outsource or contract with one provider to manage the other providers who deliver IT services,” Potter says.

BlackBerry Updates Sync With Microsoft, IBM Software

New software for Enterprise Server also gives administrators better monitoring, alerting, troubleshooting, and reporting capabilities.
Research In Motion on Tuesday rolled out several new updates to its BlackBerry platform, including improved messaging, enhanced security, and simplified device management.

The BlackBerry Enterprise Solution consists of tools that allow business professionals to access communications and information wirelessly, including messaging and collaboration (e-mail, instant messaging, calendars, address books, and tasks), enterprise data, and personal productivity tools. A key element of RIM's enterprise platform is the BlackBerry Enterprise Server, which enables device management and "push" wireless e-mail on BlackBerrys.

The new updates to the BlackBerry Enterprise Server and BlackBerry device software are meant to improve messaging and productivity tools on BlackBerry smartphones. For example, one of the updates is the ability to download and edit Microsoft Office Word, PowerPoint, and Excel documents on the smartphones.

As part of its messaging update, RIM introduced remote e-mail search that lets users retrieve e-mails from servers even if they're no longer stored on BlackBerrys. BlackBerry users also can now check the availability of their colleagues before requesting a meeting, and users running IBM Lotus Sametime and Microsoft Live Communications Server now get improved address book integration, according to RIM.

One other new feature is HTML and rich text e-mail rendering, which means e-mail messages can now be viewed on BlackBerrys in HTML and their original formatting, whether they were sent with embedded images, hyperlinks, tables, or bullet points.

Microsoft began offering a similar capability to view e-mails in their original HTML formatting with the release of its Windows Mobile 6 operating system last year.

RIM said it also added a new software component of the BlackBerry Enterprise Server that will provide administrators with better monitoring, alerting, troubleshooting, and reporting capabilities. The BlackBerry Monitoring Service will be included in the next release of the BlackBerry Enterprise Server, the company said.

Additionally, BlackBerry users can download RIM's free software called BlackBerry Web Desktop Manager, which allows software to be installed on BlackBerry smartphones from any desktop Web browser.

With the new features also comes enhanced security, such as the ability to view attachments within encrypted PGP and S/MIME messages, administrator control over GPS functionality on BlackBerrys, and administrator control of Bluetooth profiles.

These updates and others are being previewed by RIM at the IBM Lotusphere conference in Orlando this week. They will become available in software releases during the first half of this year.

Going Green? Don't Forget Software

CIOs shouldn't limit their focus to server and storage hardware when launching energy-efficiency programs.
When people think of “green” IT efforts, chances are they consider areas such as server consolidation, energy-efficient storage and other hardware-related initiatives. After all, servers and storage systems are consuming much of the power in data centers.

But software can also play a significant role in running more environmentally friendly technology infrastructures. CIOs who fail to look at how software can help reduce energy consumption are missing out on good opportunities to make their organizations greener.

There are three distinct areas to consider when it comes to green software efforts, says Richard Hodges, founder and CEO of GreenIT, a consulting firm that specializes in the emerging field of environmentally sustainable IT and communications systems.

First, software “needs to drive hardware decisions [that] create the eco-footprint of IT,” Hodges says. “The more efficient your applications and system software is, the less hardware is needed to run it. Less hardware means less power, less cooling, less material used and less electronic waste.”

The second area encompasses software tools that can be used for measuring and managing the eco-footprint of IT. For example, Hodges says, software is available to help organizations figure out what hardware and software they actually have, and what they can do with it. Specialized applications can be used for automatically managing desktop power consumption. “There are numerous software tools available for data center power monitoring and management,” he says.
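
The kind of saving desktop power-management tools chase is easy to estimate. Every figure in the sketch below is an assumed example value, not a measurement:

```python
num_desktops = 1_000
idle_watts = 80                 # assumed draw of an idle desktop (W)
sleep_watts = 3                 # assumed draw in sleep state (W)
idle_hours_per_day = 14         # assumed nights-and-weekends average
electricity_rate = 0.12         # assumed $ per kWh

kwh_saved_per_year = (num_desktops * (idle_watts - sleep_watts)
                      * idle_hours_per_day * 365) / 1000
dollars_saved = kwh_saved_per_year * electricity_rate
print(f"~{kwh_saved_per_year:,.0f} kWh/year, roughly ${dollars_saved:,.0f}/year saved")
```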

The third area comprises software tools that support innovation driven by information and communication technology. For example, Hodges says, corporate social responsibility software automates reporting and supplants basic record-keeping tools such as spreadsheets. Other software, such as dashboards and enterprise sustainability reporting tools, will become an important new product area and will help drive the realization of sustainability goals, he says.

“Greening, [also called] sustainability or eco-responsibility, is not a fad,” Hodges says. “It is a major, long-term trend,” and any enterprise that wants to be successful must have a sustainability plan and partner with the right people to execute it. “That plan should explicitly address the role of IT, which is too often overlooked,” Hodges says. “CIOs are well-positioned to take a leadership role for the sustainability program of the entire enterprise and demonstrate their credentials as strategic thinkers.”

Cracker Software Found on Asus Recovery Disks

The recovery DVDs that come with Asus laptops contain software crackers and confidential documents from Asus and Microsoft, reports PC Pro.

The magazine verified claims by one of its readers whose antivirus software was triggered by a key cracker for the WinRAR compression software, found on the disk in a file labeled “Freddy Cruger.”

The documents include an Asus PowerPoint presentation of problems the company has identified, including incompatibility issues.

The magazine also points to comments by a U.S. reader on an Asus forum who found someone’s resume on the disk. No doubt that person will need that resume now.

Nokia Tops Greenpeace’s Greener Electronics Guide

Nokia has climbed back to the top of Greenpeace’s Greener Electronics Guide.
Nokia scored a 7 out of 10 to take the top spot after having been saddled with a penalty point for the last nine months because of its electronics takeback policies in India. According to Greenpeace researchers, Nokia now has the top takeback policy in […]

Relieve Homeland Security of Cyber-Protection Duties?

The federal government’s problems with cyber security usually make news for their cost, their failure or how much we’re not told about them or their remedies.
Now a panel is suggesting to Congress that the Department of Homeland Security really isn’t up to the task of protecting government against cyber attack, reports CNET. It’s criticized as […]

Mozilla Acknowledges EULA Mistake

Mozilla’s next Firefox update will not contain an end-user license agreement (EULA), reports Computerworld.
The open source developer has admitted it made a mistake adding a EULA to the Linux version of Firefox after several angry Ubuntu users complained about its inclusion in Firefox 3.0.1. EULAs are commonly used in Windows and OS X but are […]

10 Ways to Cut IT Operational Costs

Many organizations are still looking to reduce operational expenses while increasing overall efficiency. Here are 10 ways that you may be able to cut IT operations costs in your organization.

10. Take a look at IT process automation. Run book automation tools have matured to the point that they can streamline routine, but time consuming IT tasks. This can include just about any IT task where you can follow discrete process steps. See how these products could help you save time and money.

9. Co-source. If you are not ready to outsource the management of your environment, try co-sourcing. In this arrangement, you still have the systems in your environment, but another firm will run and operate them. This way, if co-sourcing does not work out, you can easily bring it back in-house. These services can be far less expensive than hiring a cadre of full-time employees, and you can manage the arrangement as a service.

8. Software tool replacement. While it may seem extreme, several new vendors are offering many of the same capabilities as enterprise providers. You may be able to purchase new software and implement it for less than the cost of annual maintenance for your current vendor.

7. Invest in an SLA management system. If you are in an environment where you pay penalties for missing service guarantees, invest in a system that can proactively notify you of potential issues before they become violations; a minimal alerting sketch follows this list. It could pay for itself by preventing a single violation.

6. Enable virtualization. Many organizations want to move more of their systems to virtualized ones, but still have not figured out how to deal with configuration, provisioning and overall management. If you deploy a virtualization management system to make virtualization a reality, you will enable faster adoption of the platform.
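
Here is the minimal SLA-alerting sketch referenced in item 7: it compares a metric against its contractual limit and raises a warning once a configurable fraction of the limit is reached, before a violation occurs. Metric names and thresholds are illustrative only, not a real SLA product:

```python
def check_sla(metric_name, current_value, sla_limit, warn_fraction=0.85):
    """Return an alert string when usage crosses the warning band, or None."""
    if current_value >= sla_limit:
        return f"VIOLATION: {metric_name} = {current_value} (limit {sla_limit})"
    if current_value >= warn_fraction * sla_limit:
        return f"WARNING: {metric_name} at {current_value}, approaching limit {sla_limit}"
    return None

# Example: average incident-resolution time against a 4-hour guarantee.
for hours in (2.0, 3.6, 4.5):
    alert = check_sla("avg_resolution_hours", hours, sla_limit=4.0)
    print(alert or f"avg_resolution_hours = {hours}: OK")
```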

Assessing Caprock Integrity Risks for SAGD and CCS Projects


Caprock integrity risks need to be assessed in the planning and operation of thermal recovery projects such as cyclic steam stimulation (CSS), steam-assisted gravity drainage (SAGD), steam flooding and in-situ combustion. Over the past few decades in Alberta, a number of steam-release incidents have occurred in which the primary mechanism involved the migration of steam or fluids along existing discontinuities in the caprock, or along or adjacent to wellbores that penetrate it. An integrated risk management workflow, incorporating geological, geophysical, geomechanical, reservoir and well construction data, has been developed by Advanced Geotechnology to evaluate the potential for various caprock leakage mechanisms and to aid in the selection of operating procedures and monitoring techniques to mitigate these risks. To see Pat McLellan’s recent presentation “Caprock Integrity: What You Need to Know for Thermal Recovery Projects” at a Canadian Heavy Oil Association/Petroleum Society technical meeting in Calgary, please visit the publications section of our website to sign up to download this talk. Contact us at 403.693.7531 if you wish to discuss how our risk evaluation process can assist your project team.

STABView™ Well Planning Software Version 3.5 Released


The latest release of STABView, our flagship software product, is now available to licensed customers. We have added many new features and capabilities including: 3D wellbore visualization; yielded zone cross-sections for multiple failure models; an improved safe "mud weight window" display; tornado sensitivity plots; and several new sample cases. The STABView manual has also been upgraded and enhanced with many useful examples and explanations. For more information on these new features and other STABView capabilities, click here. To request a demonstration of STABView, in person or over the internet via WebEx, please email us at software@advgeotech.com or call 1.403.693.7530.

ROCKSBank Database, Version 2.3 Released


ROCKSBank is a worldwide rock mechanical and petrophysical properties database originally developed by Advanced Geotechnology in a joint-industry project. Our latest release of the software, in January 2008, has data on over 3,600 samples, including sandstones (25%), shales, siltstones and mudrocks (31%), coal (25%), oil sands (5%) and carbonates (11%). ROCKSBank can be used for a wide variety of geomechanical, geophysical and reservoir engineering applications in the E&P business.

Gas Shale Geomechanics and The Montney Formation


Unconventional gas plays in Western Canada have been attracting a lot of investor and operator interest recently. Unlocking these vast tight resources requires a mix of new evaluation, drilling and stimulation technologies in which rock mechanics plays an integral, if not dominant, role in some cases. Advanced Geotechnology has long been close to shale issues related to wellbore stability and hydraulic fracturing, and this same expertise is now being applied to shale gas reservoirs. For example, we are presently undertaking a project for one operator to interpret microseismic data collected in multi-zone Montney well stimulations, with a goal of determining the effects of natural fractures and in-situ stress in the setting. Recently we completed an investigation of the elastic properties of Montney and other gas shales and siltstones based on their dynamic log response, using classic lambda-rho and other petrophysical analysis techniques and novel small-sample static laboratory tests. In another location, we are assessing the long-term feasibility of horizontal openhole completions in a Montney interval in light of several recent hole collapse events. For several other operators we have assessed the feasibility of using underbalanced or managed pressure drilling technologies (UBD/MPD) in the Montney. If you would like to discuss what Advanced Geotechnology can bring to your exploration and production team for gas shales, do not hesitate to contact Pat McLellan, AG principal consultant, at mclellan@advgeotech.com or 403.693.7531.