Wednesday 30 December 2009

The eCrime of the Decade goes unpunished

So finally Gonzalez, the ‘mastermind’ behind the targeted SQL injection attacks on Heartland that yielded around 150 million payment card details, is being sentenced to at least 17 years in a US prison. To put this time in perspective, Gonzalez will serve about four seconds for every record stolen. His co-conspirators, believed to be in Russia, have yet to be apprehended, making this sentencing a hollow victory for the US justice system.

The Department of Justice Assistant Attorney General Breuer said that they “… will not allow computer hackers to rob consumers of their privacy and erode the public's confidence in the security of the marketplace,” adding, “criminals like Albert Gonzalez who operate in the shadows will be caught, exposed and held to account. Indeed, with timely reporting of data breaches and high-tech investigations, even the most sophisticated hacking rings can be uncovered and dismantled, as our prosecutors and agents demonstrated in this case.”

The reality is that the hacking ring has not been broken, and Mr. Gonzalez’s co-conspirators are free to continue their illegal activities. The technological vulnerabilities that allowed the Heartland breach to occur are still prevalent in the global IT infrastructure. Verizon has reported that these vulnerabilities are the fastest-growing exploit vector for cyber-criminals.

It would seem that enterprises and others should realize that, given the prevalence of these vulnerabilities, they have a high likelihood of being hacked, and should take immediate precautions. Knowing that these vulnerabilities are present gives these enterprises a responsibility and obligation to protect their customers from the Gonzalezes of the world, especially since only a few will ever be caught.

Wednesday 16 December 2009

Drinking "data" securely from Amazon’s Cloud

The folks at Amazon have announced demand-and-supply-based pricing for their cloud resources, whereby it becomes cheaper per hour to run your enterprise applications when demand is low. My take on this is broadly positive, as it is getting closer to the true cloud model of “pay per drink”, where the price of the drink depends on how many other drinkers there are (and the size of the barrel). All of this, however, is completely orthogonal to whether the drink is toxic or not (or whether other drinkers are spitting in the barrel).

Whatever the pricing model for computing power, it has absolutely NO correlation with the level of security provided.

When drawing from any shared pool, we need to ensure that all drinkers use only special drinking straws with filters built in, like those at istraw. Such "virtual cloud straws" are simply filtering firewalls that only permit "clean and safe drink" to pass the lips of the drinker.

Now, a security drinking straw that itself runs in the cloud, is flexible, and can be paid for on a "pay per filtering" basis fits the vision of the cloud. How would it be to provide a virtualized database firewall that runs in the Cloud, filtering out unwanted database accesses and keeping your database from being "sucked dry" or "poisoned"?
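
For the technically minded, here is a minimal sketch in Python of what such a "straw" boils down to: a proxy that normalises each statement and only lets through query shapes it has been taught to trust. It is purely illustrative; the allowlist, names and rules are my own assumptions, not a description of any real product.

    import re

    # Hypothetical allowlist of normalised query "shapes" learned during a training period.
    ALLOWED_SHAPES = {
        "SELECT name, balance FROM accounts WHERE id = ?",
        "UPDATE accounts SET balance = ? WHERE id = ?",
    }

    def normalise(sql):
        """Replace literal values with placeholders so queries compare by shape, not by value."""
        sql = re.sub(r"'[^']*'", "?", sql)   # string literals
        sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals
        return " ".join(sql.split())         # collapse whitespace

    def allow(sql):
        """True if the statement may pass through the straw, False if it is filtered out."""
        return normalise(sql) in ALLOWED_SHAPES

    print(allow("SELECT name, balance FROM accounts WHERE id = 42"))         # True - expected shape
    print(allow("SELECT name, balance FROM accounts WHERE id = 42 OR 1=1"))  # False - injected clause changes the shape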

Watch this space.

Wednesday 9 December 2009

What’s ahead for 2010?

Verizon has issued an addendum to its 2009 threat report that shows how damaging SQL injection attacks have become in a short period of time. According to the report, SQL injection was used in 19 percent of the cases and accounted for 79 percent of the breached records. We expect SQL injection to be the primary means of unauthorized data access in 2010, accounting for as many as 90 percent of all breached records if proper controls are not put in place.

Their report is titled "Data Breaches Getting More Sophisticated", but the reality is that these SQL injection attacks, which obey the 80:20 rule, are the result of really "dumb" application development compounded by lax security and missing defenses. The headline should really read "Data Defenses Must Get More Sophisticated".

We are dealing with a quickly evolving threat ecosystem, and companies today need to take measures that assume the hackers will enter the network through the very applications that they have invested in. What provisions do you have in place that will stop the identification and stealing of information? If you can’t answer that question quickly and clearly, you may be in for a difficult 2010.

Tuesday 1 December 2009

Database Security taken seriously at last

The news that Guardium has been acquired by IBM has been followed with great interest by those of us in the database security industry, as you would expect. What makes this acquisition so interesting is its timing. In 2009, the general business community became well aware of what we in the data security industry have viewed as the common threat landscape for years: insiders, third parties, organized criminal gangs, SQL injection attacks, and so on. A mixture of technological advancement and economic instability provided the perfect threat storm that was 2009. This, however, is more than a “we told you so” moment.

With this acquisition, we see database activity and transaction monitoring becoming central to any organization’s security plan. Nick Selby has written a great perspective on why this acquisition marks a change in the overall security landscape and predicts increased quality, better-integrated components, and cross-enterprise security programs. (Note: Secerno is mentioned in the article.) In the coming years, not having real-time knowledge of your database’s activities and the ability to block threats will seem antiquated – almost like a company not having a firewall. We welcome this next phase of the security industry – and it has been a long time coming.

Tuesday 17 November 2009

The T-Mobile “Defense”

An old English proverb tells us that “There are none so blind as those who choose not to see.” Today T-Mobile are in the news for insiders selling on customer personal data in breach of U.K. data protection legislation. T-Mobile claim the data was sold "without our knowledge".
The key word in this excuse is "knowledge".
  • What did they know about their data and the way it is/was used?
  • What did they know about the data leak?
  • What do they know that they are not telling us?
  • Did they, in fact, have any actual knowledge or did they simply choose not to look?
This is another case of a global organization simply choosing not to invest in processes or technologies to control data and database access. Having such security systems in place, and publicizing them amongst staff, is a powerful deterrent and is effective in cutting insider data breaches.
The UK Information Commissioner, Christopher Graham, is advocating custodial sentences for this type of abuse of personal data. Until there is sustained public understanding resulting in political pressure, I doubt this will become a reality in the near term.

Perhaps it is not just T-Mobile who choose not to see – maybe it is us, the people, who let our personal information float freely, without truly understanding how it is used.

Wednesday 11 November 2009

SQL injection sees a big payout

Yesterday, prosecutors in Atlanta announced indictments against an alleged crime ring from Eastern Europe. The achievements of their hackers point to frightening means of financial data theft. According to the reports, the hackers attacked payment processor RBS WorldPay, cloned prepaid ATM cards, and used them to withdraw cash totaling $9 million in 280 cities globally. These attacks took place in November 2008, and the timing is significant given that similar breaches of card data were occurring via SQL injection attacks.

In the spring of 2008, fully automated SQL injection attacks were spreading rapidly – but the reports focused on the visible outcomes and listed them as “i-frame attacks” rather than their root cause, a database attack. At the time we warned that SQL injection attacks were both increasing and becoming more severe, moving towards attacks whose purpose was to serve as much malicious code on as many web sites as possible. In the few months between then and November 2008, the attacks moved beyond proof-of-concept and annoyance hacks to direct database manipulation and fraud. One year later, our call to action remains the same: all companies need to address the vulnerabilities the web environment poses to their databases. We recommend that additional security precautions be added so that SQL injection attacks are blocked, ensuring that the database cannot be used directly to mount a costly and embarrassing data breach.
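
For developers wondering what the simplest of those precautions looks like at the application layer, here is a small, hypothetical Python sketch (the table and data are invented) contrasting a query built by string concatenation, the root cause of most SQL injection, with a parameterised one:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, card_number TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

    user_input = "x' OR '1'='1"  # classic injection payload

    # Vulnerable: the input is concatenated straight into the SQL text,
    # so the OR '1'='1' clause returns every row in the table.
    vulnerable = conn.execute(
        "SELECT card_number FROM users WHERE name = '" + user_input + "'"
    ).fetchall()

    # Safer: the driver binds the input as a value, never as SQL,
    # so the payload matches nothing.
    parameterised = conn.execute(
        "SELECT card_number FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print(vulnerable)      # [('4111-1111-1111-1111',)] - the data leaks
    print(parameterised)   # [] - the injection attempt fails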

Thursday 5 November 2009

First the telcos …?

Today brings news that the EU will require telecommunications companies to inform affected parties of data breaches. Although some would argue (and are arguing) that this measure should extend to all businesses – and we agree, eventually – the EU measure is a critical first step. Since telecommunications companies and service providers have online components as well as the means to store vast amounts of customer data, starting with these groups makes sense.

We fully expect data protection measures to extend to different business types and industries, but these extensions should be done in a measured, controlled manner. The very worst thing that the EU could do is impose broad, blanket data protection measures that would affect all industries immediately. Historically, these measures (for example Sarbanes-Oxley in the United States) have created compliance costs and headaches that can be as difficult to maneuver as the problems they were intended to solve.

Rather than bemoan the fact that the measures are starting with the telcos, let’s look at this as an important first move, one that is being done correctly and gives all businesses time to prepare for the inevitable cross-industry data protection measures that will emerge in the coming years.

Tuesday 27 October 2009

When Government is too much like the private sector

The Swiss foreign ministry has been hit by hackers, forcing its computer systems to be shut down for days. Details are still emerging, but initial reports point to a computer virus, well hidden on the network, designed to grab specific data. What this attack shows is how attractive government computer systems have become to hackers, which makes sense given the amount of personal and financial data governments house.

Governments may have fallen behind the private sector, still assuming that a network-perimeter approach to protection will keep data safe. Attackers can easily bypass known weaknesses on the perimeter and, once in, use various means to capture information. All governments should assume that their information will be under attack at some point, be it from individuals or foreign powers. Given the ease with which these weaknesses are exploited, they need to take measures that protect the data from inside the perimeter. Governments have this protection model in place already, but it is usually reserved for staff or physical assets, and involves additional layers of protection inside the perimeter. They need to give their data the same level of consideration.

That they have temporarily suspended internal access to the internet for government departments suggests that they need to choke off the malware from sending data out. Alas, once a site has reached this state it can be difficult to clean up. Real defense requires preventing the information "misuse" from being established in the first place.

Sunday 25 October 2009

Find a job and lose your identity

Job seekers using internet employment sites have been warned that their personal information has been compromised. The Guardian newspaper's jobs site has contacted users who posted their details to inform them of a breach. The information stolen would be sufficient for a criminal to fraudulently open bank accounts and apply for credit cards. This is not the first time job sites have been hacked, and 1.3 million records were stolen in this episode.

One of the key functions of job sites is to act as a trusted intermediary between those with jobs and those without, acting like a matchmaker. Discretion and confidentiality are taken for granted. Perhaps it is time for websites to make it clear to users that the site provides no guaranteed care of the sensitive information users are asked to entrust to it.

In the current job climate, it is careless for organizations to put any data at risk, let alone that of their customers’ future employees.

Friday 16 October 2009

Oh no, not again: Data breach phase two

It appears that US-based payroll services provider PayChoice has experienced the second phase of a very coordinated data attack. Last month, the company experienced a breach in which customer user names and passwords were stolen, and it appears that this information was used to trick customers into downloading malware. The download allowed criminals to add fraudulent employees and associated payrolls to the accounts of PayChoice customers. The details of the second phase of the attack are still emerging, but what happened at PayChoice shows the need to have added protection around sensitive data, even from people who are seemingly authorized to use it.

Once criminals have access to an account via an authentication method, they can manipulate the data as though they were a trusted user. Many times, the activity is not caught until well after the breach or theft has occurred because the system is operating under the assumption that it is getting orders from an authorized user. What PayChoice points to is the need to have a granular view of what is going on with data at all points and for all transactions.

With the proper controls in place, PayChoice would have been alerted to suspicious activity – in this case, apparently adding false employees to payroll accounts – and would have had the ability to block it.
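
As a purely illustrative sketch, and not a description of how PayChoice or any vendor actually works, a granular, transaction-level rule of that kind might look something like this in Python (account names, baselines and thresholds are invented):

    from collections import defaultdict

    adds_today = defaultdict(int)                 # "add employee" actions per account today
    baseline_daily_adds = {"acme_payroll": 2}     # hypothetical per-account baseline from history

    def record_employee_add(account):
        """Return 'allow', or 'block' when an account far exceeds its normal behaviour."""
        adds_today[account] += 1
        allowed = baseline_daily_adds.get(account, 1) * 5   # generous multiple of the baseline
        if adds_today[account] > allowed:
            return "block"   # and raise an alert for a human to review
        return "allow"

    # The eleventh addition in one day trips the rule for this account.
    for _ in range(11):
        decision = record_employee_add("acme_payroll")
    print(decision)   # "block"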

Wednesday 7 October 2009

The dirty little secret your bank may be hiding

This summer Actimize found that nearly 80 percent of financial institutions worldwide say the insider threat problem has increased in the wake of the economic downturn, with only 28 percent of the banks surveyed not suffering an insider breach. Surprisingly, the majority of the breaches are coming from what the industry calls “trusted insiders,” full-time employees with access to data. Interesting also is the fact that the recession has caused many employees to cross the line. Some are in financial need, and others are resentful of longer hours or expanded job responsibilities due to lay-offs.

The typical response – reduce access to sensitive data – is difficult in the financial services industry, in which access to customer and company information is a necessity for most jobs. The answer needs to be broader and needs an accompanying change in attitude. Banks, like any organization, should assume that their data is under threat from insiders and should take steps to ensure their protection measures are in line with this thinking. Some examples would be blocking unusually large data downloads, stopping downloads during off-hours, and preventing certain types of changes. The technology is there and, unfortunately, today’s threat environment demands this level of protection.
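
To make those examples concrete, here is a minimal Python sketch of such policy checks. The thresholds, working hours and statement classification are illustrative assumptions only, not recommendations for any particular bank:

    from datetime import datetime

    MAX_ROWS_PER_QUERY = 10_000        # "large download" threshold
    WORK_HOURS = range(8, 19)          # 08:00 to 18:59 local time
    FORBIDDEN_CHANGES = {"DROP", "TRUNCATE"}

    def evaluate(statement, estimated_rows, now):
        """Return 'allow' or 'block' for a database request under these example policies."""
        verb = statement.strip().split()[0].upper()
        if verb in FORBIDDEN_CHANGES:
            return "block"                              # disallowed type of change
        if estimated_rows > MAX_ROWS_PER_QUERY:
            return "block"                              # bulk extraction attempt
        if verb == "SELECT" and now.hour not in WORK_HOURS:
            return "block"                              # off-hours download
        return "allow"

    print(evaluate("SELECT * FROM customers", 250_000, datetime(2009, 10, 7, 14, 0)))  # block: too many rows
    print(evaluate("SELECT * FROM customers", 200, datetime(2009, 10, 7, 2, 0)))       # block: off-hours
    print(evaluate("SELECT * FROM customers", 200, datetime(2009, 10, 7, 14, 0)))      # allow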

In these tight economic times, organizations must not take extra risks by reducing IT security budgets.

Wednesday 30 September 2009

Are Clouds Compliant?

Today I was part of a panel where we debated whether Clouds are compliant. The session was part of the BrightTalk online Cloud Computing Summit and was hosted by Peter Judge, UK Editor, eWeek Europe. I was joined by IBM's James Rendall and Paul Roberts of The 451 Group, and we had a lively session.

The questions we worked through were:
Q1. Do you trust the cloud?
Q2. Are clouds compliant?
Q3. Is compliance a barrier to adoption?
Q4. How should we make clouds compliant?

There was online polling of the audience, with clear majority responses as follows: Q1 – No; Q2 – No; Q3 – Yes.

Question 4 was the most interesting for me. We actually debated two courses of action. First, "Change clouds to accommodate regulation". Alternatively, "Change regulations to accommodate clouds". A cheeky 25% of the audience voted for the latter! This is like raising the speed limit on the roads because it is impossible to stop motorists from speeding. Is this a good idea?

I cynically pointed out that the underlying problem is not peculiar to cloud and is commonly observed in other computing architectures. IT is a business enabler, and businesses want to make profits. Only once a profit-making system is in place do organizations become concerned about compliance and security issues. Alas, the elasticity and remote nature of cloud infrastructures make retro-fitting security devices (e.g. firewalls) nearly impossible. The only way to retrofit security into the cloud is to make the security technology ‘cloud hostable’ and have it inserted seamlessly into the underlying fabric. Perhaps a DataWall for the cloud – watch this space.

Wednesday 23 September 2009

Why hack a database when the data is being given away!

Here at Secerno we spend all our time helping our customers protect databases to ensure that they keep their precious data safe. For an Internet Service Provider (ISP) like the U.K.’s Demon, precious data includes username and password information that their customers use to access services. Something certainly worth protecting!

Imagine my horror to learn that "Demon's director of customer service" has emailed 3,681 of their customers and attached a list of user details for all 3,681 of them. This is not an attack to steal data – this is an appalling example of a data leak caused by “human error”.

In a paperless office empowered by IT, this sort of thing is all too easy to do. Imagine “accidentally” stuffing printed client lists into a paper-based mail-out to customers in the good old days of paper systems. Unlikely. Let us wait and see if there is any legal action. The U.K. legal framework lacks the teeth to really bite.

Data security goes beyond defending against malicious attack – it must also defend against well-intentioned fools.

Wednesday 16 September 2009

Data Breaches: The way to a corporation’s data-heart is through their applications-stomach

Again we learn that, like the old adage “the way to a man’s heart is through his stomach”, the way to a corporation’s data is through its applications. A hacker announced that he was able to get through to the RBS WorldPay database via a SQL injection vulnerability in one of their web applications. This is nothing new.

Last week the CEO of Heartland Payment Systems, Robert Carr, highlighted that it is not just web applications that have the flaws. The breach, which ultimately leaked more than 130 million card numbers from Heartland’s payment systems, was actually initiated through an unrelated corporate application. This, too, was exploited via SQL injection, allowing the attacker to use the database to gain a “position” on the network, from which undetectable malware delivered a sniffer that collected passing card numbers from the card payment system.

Heartland had many penetration testers and certified security auditors (including PCI QSAs) constantly crawling all over their systems – even after they had learned of the injection attack. For many months they were reassured that their card data was still safe. Alas, history tells us that they had a false sense of security – until they went looking for the sniffer, based on lessons learned in the Hannaford Brothers data breach.

Now – like Heartland – the initial claim of RBS (owners of WorldPay) is that no data was leaked in this recent exploit. How long will it be before we learn otherwise?

Wednesday 9 September 2009

What’s in a number?

Today, the Ponemon Institute revealed that 67 percent of French organizations have been hit by a data breach incident over the past year, with 18 percent suffering more than five incidents. If this seems high, there is a reason. According to Ponemon, only 8 percent of these breaches were reported, so we never heard about the other 92 percent because there was no legal or regulatory mandate for reporting them.

The issue of reporting and disclosure is hotly contested, oftentimes pitting the rights of individuals against corporations that want to distance themselves from the bad publicity and associated liabilities. The United States, which has seen some of the largest data breaches in history, still does not have a single standard for data breach reporting or regulatory data protection requirements.

We can’t expect companies to willingly disclose data breach information – the consequences are too severe, even though full disclosure would work to their benefit over time. What needs to happen is for the same focus on transparency that is being heralded in the financial services industry to be applied to data breaches, with the primary goals being catching those responsible and informing those affected as soon as possible. This transparency will come, at the very least because certain industries will require it. In the meantime, we take solace in the 71 percent of Ponemon’s respondents in France who rank data protection as a critical component of their overall protection plan. These companies are not completely overlooking data protection, but they are playing catch-up (as are most companies these days) with very sophisticated hackers.

French cuisine may be famous for its rich sauces; it is clear from this report that French organizations are a rich data source too!

Thursday 3 September 2009

Declaring war on easily attacked applications

Today is the 70th anniversary of Britain’s declaration of war that brought it into the Second World War. With news of yet another poorly defended business application allowing its database to be attacked, we should declare war on poorly written applications.

In the Sears.com case, it was very poor application design that made the attack possible. Foolishly, the functionality that is meant to protect the web application was deployed in the least trustworthy of locations: the “customer’s” web browser. On the internet, “customers” and “attackers” are indistinguishable. As Sears.com has no control over the “attacker’s” browser, it has no reason to trust that this inadequate security control mechanism will not be altered, tampered with, or completely disabled.

Further poor practice meant that there was no independent monitoring or enforcement system that prevented, or even alerted on, the strange behavior of a massive increase in certain functionality, which allowed a “customer” to “stage a brute-force attack that could grab all valid, active Sears and Kmart gift cards from the company's database”.
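
For illustration only (the card data, limits and function below are invented, and this is not how Sears actually operates), the kind of server-side control that was missing might look like this: the balance check lives on the server, and an independent rate limit stops one client from enumerating card numbers in bulk.

    import time
    from collections import defaultdict, deque

    MAX_LOOKUPS_PER_HOUR = 20
    recent_lookups = defaultdict(deque)           # client id -> lookup timestamps

    VALID_CARDS = {"6006491234567890": 25.00}     # hypothetical card number -> balance

    def check_balance(client_id, card_number):
        """Server-side balance lookup with a per-client rate limit the browser cannot disable."""
        now = time.time()
        window = recent_lookups[client_id]
        while window and now - window[0] > 3600:  # drop lookups older than an hour
            window.popleft()
        if len(window) >= MAX_LOOKUPS_PER_HOUR:
            return "blocked: too many lookups"    # brute-force enumeration stops here
        window.append(now)
        return VALID_CARDS.get(card_number, "no such card")

    print(check_balance("client-1", "6006491234567890"))   # 25.0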

What should organizations do?
1. Raise the level of software engineering so that secure development processes are embedded in all application development.
2. Test, test, test, test, and re-test the application for vulnerabilities. Remember, these are vulnerabilities that are in applications that your developers wrote – not just operating system or platform component vulnerabilities.
3. Put monitoring and enforcement systems in place to fully understand what normal usage patterns are for the application and ensure that real-time policies can prevent unwanted behaviors.

Finally, organizations should make an open declaration of war on poorly produced and operated applications!

Tuesday 18 August 2009

The aftermath of the largest data breach ever

Two unseen computer users in Russia, along with a colleague in Miami, decide to set up a scheme. They are after the millions of credit card numbers stored on retail servers. The US-based member does reconnaissance at the stores to see what type of protection they have. The team then cross-references this information with the types of protection referenced on the companies’ web sites and starts a series of strategic attacks to gain entry to the networks using SQL injection, which exploits a vulnerability in the database layer. Once in, they place sniffers and malware on the network, capturing credit card data and sending it to servers in the US, the Netherlands, and Ukraine. They communicate by IM, use proxy servers, and change their online identities frequently. Over the course of two years, they steal 130 million records, the majority of which are sold. What sounds like a hit summer movie is, in actuality, the story outlined in an indictment released today in New York against the hackers who breached Heartland, among others.

If we look at this breach as a clever group of renegades, we are missing the point. These breaches show the value our financial data holds and how little control we ultimately have over it. Before we get dazzled by the locations, methods and number of credit cards hacked, we should ask why the data was not encrypted or did not have other protection mechanisms in place.

This type of defect is all too prevalent in the low quality IT systems in which we blindly give our trust. We can be sure that the biggest breach is yet to come!

Saturday 15 August 2009

The University Data Breach Blues

This week brought news of another successful breach at UC Berkeley, in which almost 500 records of applicants were stolen by hackers. This is the second such reported hack at UC Berkeley in less than five months, with the earlier hack exposing 160,000 records. These two attacks point to the attraction that universities hold for hackers. Every university requires personal data as part of the application process, and hackers know that these locations guarantee some amount of valuable data. Unlike financial services companies or many retailers, universities lack the most sophisticated data protection measures. They also do not have compliance standards for data housing, making them uniquely attractive to hackers.

The Open Security Foundation, a nonprofit that tracks data breaches, estimates more than 11 million records stored at US colleges and universities have been compromised. Many times, these breaches are not discovered until well after the data is lost. UC Berkeley, for example, found out about this current breach from an alleged hacker’s website.

We have entered a world in which personal data is always at risk from hackers who will grab and sell it for profit. Retailers and financial institutions have felt the pain of protecting data in this environment, and they have the latest technology as well as compliance measures for protection. What will universities do, since they do not have the same financial resources?
The answer could come in part from compliance guidelines, with government and the private sector working together to suggest best practices and protection measures. Doing so should allow graduates to enter the post-university world with their data, and credit reports, uncompromised.

Friday 7 August 2009

Easy-to-use or Easy-to-lose? Health-care call-centers sneezing over our private data

The risks of inappropriate data handling in health-care call-centers have been raised again in the press recently. It is clear that there needs to be some conduit for this sort of highly personal information, as companies such as insurers constantly need to use it.

However, call-centers present a significant risk to data privacy. The bulk of the work in a call center is performed by low-paid, low-skilled telephone operatives in an industry where 30% annual staff turnover is considered exceptionally low. One researcher suggests that in the banking sector in Scotland, annual staff turnover is more like 80%. Worse still, police investigations have shown that call centers in some industries are routinely infiltrated by members of criminal gangs whose aim is to get copies of valuable data.

The original article acknowledges the insider and external threats, and states that “Agents must have ‘easy to use’ and reliable means to send and receive confidential…” information “… inside the firewall, as well as outside.” We all know that “easy-to-use” systems and “secure” systems rarely go together. With the low skill levels and high staff turnover, my guess is that “easy” trumps “secure”. Perhaps we should associate "easy-to-use" with "easy-to-lose" (data).

Do you want your precious health-care record sneezed on and transmitted “un-healthily” around a call center?

Thursday 30 July 2009

Heartland Payment Systems: What went wrong and why we need the Transportation Safety Board for Data

I am really looking forward to hearing the CEO of Heartland Payment Systems discuss the details of the intrusion that stole masses of payment card data. Bob Carr will be giving a webinar on the topic and the subsequent steps they have taken.

Consider what would have happened if this incident had occurred in the airline industry. We would expect a team of accident experts from the TSB to perform a detailed, open enquiry into the matter and come to strong conclusions about what went wrong and why. They would also disseminate all new knowledge and best practice, to be mandated throughout the industry.

For a data breach, what we get is a webinar. Merely “candid” webinars with the CEOs of companies who have been breached are just not good enough. We need publicly funded and accountable organizations to pick through the burning rubble of yet another data breach and force the industry to improve.

Openness and learning lessons from the mistakes of others is the norm in aviation. Aviation regulations are also much more rigorous and are often written through exceptions. For example Rule Zero: No one may fly. Rule One: ... except if you are a registered compliant airline. Rule Two: ... except …

The data world needs a similar culture and framework. Perhaps we could start with Rule Zero: no organization may hold data. What do you think Rule One should be?

Monday 27 July 2009

Let the Script-Kiddies Loose – One click database hacking

Should we panic? Researchers at Black Hat have just announced plans to package their hacking knowledge so that it can become part of Metasploit, a generic vulnerability exploit framework. This will definitely lower the skill level needed to attack databases, so that even a script-kiddie should be able to use it.

The reality is that, yes, databases are very vulnerable, and that system owners, from database administrators to network security professionals, should be taking action. However, the deployment posture of databases changes their threat landscape – especially for external attack. Very few databases are directly connected to the outside world; they are nearly always accessed via applications. So in this situation the standard 'pre-packaged' exploits can have limited efficacy.

It is well known that the application layers are the worst offenders for holding exploitable bugs and security issues. As each application has been built to do a specific job, each application has its own unique set of security issues. With some access to the application, it does not take much effort for an attacker to get at data they should not see. Web sites are simply applications that provide access to everyone – including external attackers. Databases are also threatened by insiders, often using their own login accounts. Sometimes data is accessed inappropriately simply out of curiosity rather than malice, whilst highly privileged users make accidental mistakes that result in corrupted data.

So back to my question at the top: “should we panic” about script-kiddies attacking our databases? No – but we must not believe our databases are effectively secured. We must take calm and consistent measures to pro-actively defend our databases from poorly written applications, nosy internal users, and the bumbling, error-prone, highly privileged administrator.

Friday 24 July 2009

Alico: the company is always the last to know

Today, news is emerging of a credit card breach at the Japanese arm of global insurer Alico, with the credit card data of approximately 110,000 customers affected. Of those affected, more than 1,000 customers have seen fraudulent charges on their credit cards, and it was the credit card companies that alerted Alico to the alleged theft.

An Alico spokesperson said that the company has yet to determine how the data could have been leaked. This statement and the fact that credit card companies alerted the company to the breach shows how difficult it can be to determine how a breach occurred, even if you know that one did occur.

This breach brings to mind RBS WorldPay and Heartland, in which customers saw fraudulent charges on their credit card bills before the companies realized they had been breached. As Alico looks for the source of the breach, we are also reminded that in this threat environment, personal data is constantly under attack. The link to criminal elements shows that these breaches are done with the express intent to grab personal financial data to be used fraudulently. In this type of situation, Alico is left playing “catch up” without the ability to stop additional damage to its customers, because their data has already been compromised. We hope that the company and all in the industry use this as a lesson as to the importance of knowing the location and status of their data at all times because it will always be an attractive target.

As they say, "an ounce of prevention is worth a pound of cure" -- now is the time to apply preventative measures to protect data.

Wednesday 22 July 2009

The cost of losing customer data: (only) £3 million for HSBC

Today, three units of HSBC were fined £3 million for losing customer data. Two of the breaches affected more than 180,000 people and, although no customer has reported any loss from these incidents, the Financial Services Authority is sending a strong message to all UK financial services firms. At issue is how careful HSBC was with the customer data, rather than the outcome of these breaches. This last point should resonate with all financial services firms, as well as any company that handles customer data. No HSBC customer experienced a loss from these breaches, but the Financial Services Authority has still taken the company to task for being careless and for failing their customers.

For all financial services firms especially, this ruling should be given strong consideration. If these types of breaches can occur at the world’s largest bank, ranked by Forbes as the sixth-largest business in the world, then they can happen anywhere. Details from the breaches allege an environment in which poor employee training in preventing and dealing with identity theft, as well as lax encryption standards, prevailed. These factors are part of “Identity Theft 101,” meaning that even the smallest, most regional bank, mortgage company, or insurance firm would make sure these controls were in place. There is some good news for HSBC in this. By cooperating, they have seen their fine reduced from a potential £4.5 million, savings that can be used for better protection.

The bad news for all of us is that the fine was really insignificant in the grand scheme of things. A few million pounds is still only loose change for these organizations, even in these times.

Thursday 16 July 2009

Why, even after Twitter, the Cloud is safe. Secerno weighs in

This week, the after-effects of Twitter’s May breach became known, with confidential employee and company information being acquired and sent to TechCrunch. At a macro level, the information shows the potentially embarrassing data that exists in every company and that no executive, shareholder, customer, employee or partner would want to see revealed.

For Twitter these include names of senior executives who interviewed for positions at the company and are currently employed elsewhere, earnings projections, new product information, and floor plans.

What is interesting to those of us in the security industry is the gap between what the breach initially appears to be (a security failure in the Cloud) and what it really is: an exploit of the password recovery system and other features of Google Apps.

The Cloud is no more or less secure than any other environment, and what happened at Twitter could easily have occurred in a traditional implementation. This breach indicates, at the very least, that traditional password protection practices were not being followed. This is not surprising, considering the stress placed on current IT budgets that results in security updates and practices being delayed. For every organization that holds information that could be deemed embarrassing if made public (so, everyone), Twitter serves as a reminder that open does not mean secure, and that protection needs to come from providing the appropriate care at the level of the data itself.

Thursday 9 July 2009

Unification – an important lesson from Ponemon

Our good friends at the Ponemon Institute have issued some important and sobering findings, indicating that 70 percent of UK organizations have experienced at least one data breach in the past year. Equally alarming is the fact that less than half of these breaches were made public, as there was no legal or regulatory requirement for disclosure.

The report finds a direct correlation between an organization’s likelihood to experience data loss and its lack of a consistent, organizational-wide strategy and enforcement of data protection and encryption policies.

Many of the organizations surveyed indicated that data protection was among their top priorities, but what caused many to fall victim to data loss was the lack of a unified, consistent approach to protecting data across every access point and device in the organization. This fragmented approach should quickly fade over the coming months. Legislation, increasing awareness, and the requirements of standards like PCI-DSS will cause companies that have not undertaken a unified approach to consider it – strongly.

If outside pressures are not enough, then financial ones will be. Recent research by Ponemon found that the average UK data breach costs a total of 1.7 million pounds Sterling; the equivalent of 60 pounds Sterling for every record compromised.

These numbers are too costly for us all. Personally, I don’t care about the cost – I simply want my data held safely!

Wednesday 8 July 2009

Is North Korea really to blame?

The security world, government agencies, and many others are abuzz with reports that North Korea is behind a series of powerful cyber attacks targeting government agencies in the US and South Korea, as well as a host of other organizations, including Nasdaq and the Washington Post.

The timing would indicate North Korea, which has been increasingly aggressive in its dealings with the United States over the past few weeks. However, it is doubtful that North Korea has the ability to launch this devastating an attack on such a large scale.

The question, then, is who would plot and execute this type of strategic hit at two major world governments, as well as some very well-known companies? The answer might be found in a series of cyber attacks that US and UK government organizations endured in the middle part of this decade.

At the time, both countries were complacent in their security measures, without realizing that their actions were being monitored by entities that launched extremely targeted attacks to penetrate their systems. It took two to three years before the details and those purportedly behind the attacks were revealed outside security circles.

Today’s situation is analogous in that it will take at least that long before we realize the “who” and “why” behind these attacks. Until then, the news reminds us how important and continuous our efforts to protect government and private data must be.

Monday 29 June 2009

The UK's cyberspace initiative

The creation of the cyber security operations unit by the British government is a necessary and positive step in combating all forms of cyber threats. We work and live in a world in which our most personal information has gone digital, and this initiative points to the importance of a centralized approach to protection, spanning citizens, government and industry.

There are also economic considerations that this initiative addresses. In the UK, for example, more than £50 billion is spent online every year and 90% of high street purchases are made using electronic transactions.

There will be much debate as to the validity of the threats and the forms that they will take; however, as members of the security industry, we know that these threats always have the ability to be more devastating and widespread than even popular imagination can dictate.

By placing the protection of "digital Britain" in the hands of the government, we are showing a united front against cyber-criminals, cyber-terrorists and the run of the mill hackers who pose a threat to our information systems and personal data. As we commend the government for taking this bold and necessary step, we would like to remind them of a lesson that industry has learned over the past few years: threats come from internal and external sources. So, a "defend the perimeter" approach will leave valuable assets unprotected.

The government should look at the threat matrix holistically, starting from the databases that hold information, to the individuals that access it, through the networks that carry the data, to the perimeter. This "ground-up" approach will ensure that we are well protected at every turn.

Tuesday 16 June 2009

View from the Tower

Today, the TowerGroup has suggested the financial services industry stands on the losing side of the battle to protect consumer data. TowerGroup analyst George Tubin believes that the majority of data within financial services institutions has been or will be compromised, because proper data protection measures continue to be overlooked. With Heartland, RBS WorldPay, Checkfree and BNY Mellon Shareowner Services making headlines with major breaches in recent months, it all suggests the industry needs to make data protection a higher priority.


Consumer anger, embarrassing headlines and the threat of legislative involvement have not stopped data breaches in the financial services industry, nor could they, sadly. In this turbulent economy, the last thing the industry would say it needs is legislative action or another protection standard to contend with, but it should take the past four months as a very serious wake-up call if it is to avoid these outcomes. These companies need to re-evaluate how they protect and store data. With each breach, mandatory legislation moves a step closer. The irony is that, as we have seen with PCI-DSS, these standards bring more cost and headache than protection. The industry cannot afford this on many levels. Any financial services firm that is not evaluating its data protection measures with a forward-looking plan in place, therefore, brings the industry closer to a mandatory protection standard.


Paul Davie

Thursday 11 June 2009

Careless Talk - Part 2

Earlier this week, the Internet buzzed with rumors about a hack at T-Mobile when the alleged hacker posted information on the security forum Full Disclosure. T-Mobile has now confirmed that the posted information is from one of its documents, but it denies that the information was obtained through a hack and says that no customer information was compromised. This is great news for the company. It's even better news for their customers. But it also points to the most common threat to an organization’s data: the corporate insider.


We have no knowledge of how this information was obtained at T-Mobile, but in an industry with many employees, contractors, third-party suppliers and partners all having access to a wealth of customer data, it should be no surprise that an insider is very likely involved. It was predictable. Telecommunications service providers have long taken the “defend the edge” approach to security, with a focus on keeping threats off the network. This makes it more difficult to monitor and block an insider from accessing information. All carriers should assume that their data is under scrutiny from the inside as well as the outside, and take this week’s happenings as a call to action.


Paul Davie