Category Archives: Interview

GCHQ – General Chit-chat, Hazy Questions?

Photo by Jenny Mealing (jennifrog) used under Creative Commons.

Yesterday's questioning of intelligence chiefs by Members of Parliament was a first in British history. The momentous occasion was preceded by anticipation that the three big agencies, MI5, MI6 and GCHQ, would offer an open and transparent account of the extent of their surveillance operations, GCHQ in particular, given that mass data collection by the UK security agencies has been suspected, or in a few cases disclosed, for some time. I was struck, however, by how little new information was actually shared and by the disappointingly weak line of questioning. One important area which wasn't clarified at all, for example, was how the practice of sifting out who is a 'threat' and who isn't is qualified; neither was the deliberate and systematic undermining of international cryptographic standards. The responses in the area of "mass data collection" even appeared to give the lie to earlier assurances that only metadata was collected and that content never was, yet that contradiction was never explored. Those assurances have now given way to a somewhat disingenuous claim that "the people who work in GCHQ" simply do not look at the content unless sufficient justification exists; in fact, they would "leave the building" if they were asked to "snoop"… Maybe part of the obvious disconnect is that those earlier assurances came from politicians themselves rather than from the intelligence community.

For any commercial entity the Data Protection Act regulates and governs the processing of personal information. Intelligence agencies and law enforcement, of course, benefit from a number of exemptions from those same rules, so it remains unclear who in the back rooms is looking out for the interests of the general public. A vague personal assurance that data belonging to "non-threats" is not viewed, and that candidates for GCHQ would not be employed if they were the sort to be tempted to do so, is not the same as a binding obligation within a legal framework. Besides, somebody must have trusted Edward Snowden in a similar way at some point…

Speaking of Snowden, it would also have been helpful for some questions to be asked to shed light on the relationships between GCHQ and foreign intelligence agencies; do they accept requests from other nations to surrender data relating to UK citizens? A report on mass surveillance of personal data that came to light on the same day as the inquiry shows that the NSA sent millions of records every day from internal networks to data warehouses at the agency's headquarters. The US National Security Agency (NSA) is clearly working in collaboration with GCHQ, so just how much is UK law helping the NSA to circumvent US law, and vice versa, and what exactly is the relationship here? For example, how does a contractor in the US, working for US intelligence services, end up with access to so much highly damaging, sensitive information about British spy agencies?

It will be very interesting to see how the requirements of the security agencies, as voiced in the February 2013 response to the Draft Communications Data Bill (Intelligence Committee response, "Access to communications data by the intelligence and security Agencies" (PDF)), influence the next draft of that same bill. The somewhat chilling conclusion of that Intelligence Committee response includes the statement that:

“Any move to introduce judicial oversight of the authorisation process could have a significant impact on the Agencies’ operational work. It would also carry a financial cost. We are not convinced that such a move is justified in relation to the Agencies, and believe that retrospective review by the Interception of Communications Commissioner, who provides quasi-judicial oversight, is a sufficient safeguard.”

Of course there will be further sessions, both in camera and, hopefully, in more public questioning. While it is clear that, in the interests of national security, many aspects of surveillance programmes cannot and should not be revealed, the level of public trust in the very people charged with protecting our liberty is at such a low that only unprecedented steps stand any chance of restoring our faith.

It seems we truly do live in Interesting Times, which is, more often than not, a curse.

Traditional AV Testing: File under ‘Irrelevant’

ZDNet recently posted a video interview with me about the current state of the threat environment and the way forward for security.


 
I explained that Trend Micro had previously declined to participate in some high-profile AV tests. We felt that these tests didn't match the reality of how threats infiltrate organisations and arguably gave a false sense of security.
 
Typically, what happens in these traditional tests is that a file repository is loaded up with a collection of different viruses, Trojans and other malware. The security software is then installed and updated, disconnected from the Internet and set to work trying to detect malware. The headline scores are then generated according to the percentage of those malicious files that are successfully identified.
 
Testers would argue, I suppose, that this creates a level playing field in which to compare different software solutions. I can understand that, but it really doesn't reflect the threat environment in real organisations, or for consumers. The most common threat vector now is the Internet; the second most common is malware downloading other malware via the Internet. Infected web pages, PDFs, social networking sites and cloud-based services represent just some of the significant real or potential threats that aren't replicated in the traditional lab-based test environment. Traditional tests focus on the file: can this security software correctly identify this file?
 
A more holistic approach is necessary. Malware and other threats arrive through various channels and, to be honest, once they have arrived some part of your security solution has already failed. And it's not necessarily through people breaking the rules: an email arrives from your CEO asking you to check out a web site, and I'd suggest that most people will click on that link. What a good security solution should be doing is asking a series of questions on your behalf, questions that aren't just about viruses but about your security as a whole:
 

  • Is this email really from your CEO?
  • Is the link it contains hosted in a bad neighbourhood or does it contain suspicious elements?
  • Have we seen other examples of this same mail elsewhere recently?
  • Is it trying to deliver files or prompting you to change settings?
  • Are those files bad?

 
The list can be almost endless, but traditional testing looks only at what happens at the last line of defence. It asks one question, a bit like leaving your doors and windows open and unwatched but attaching a burglar alarm to the jewellery in your sock drawer. We believe that a security system should kick in at the first link in this chain of events, not the last. No solution is 100% reliable at any level, but if you have multiple levels of control, each of which informs the others, then your chances of avoiding any compromise are so much the better. Prevention is significantly better than cure in such situations.
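As a purely illustrative sketch of that chain of events, the short fragment below strings the kinds of questions listed above into a pipeline that stops at the first failing layer, before any file is ever fetched. Every function name, domain and reputation list in it is a hypothetical placeholder, not a description of any real product.

# Hypothetical sketch of a layered, "first link in the chain" check on an
# incoming mail. Data sources and names are illustrative placeholders only.
import re

CORPORATE_DOMAIN = "example.com"                             # assumed company domain
BAD_NEIGHBOURHOODS = {"badhost.example", "malware.example"}  # assumed reputation list

def sender_is_plausible(sender: str) -> bool:
    """Layer 1: does mail claiming to be from the CEO use the company domain?"""
    return sender.lower().endswith("@" + CORPORATE_DOMAIN)

def link_reputation_ok(url: str) -> bool:
    """Layer 2: is the linked host in a known bad neighbourhood?"""
    host = re.sub(r"^https?://", "", url).split("/")[0].lower()
    return host not in BAD_NEIGHBOURHOODS

def seen_elsewhere_recently(message_hash: str, recent_hashes: set) -> bool:
    """Layer 3: have we seen this same mail in other inboxes recently?"""
    return message_hash in recent_hashes

def allow_message(sender: str, url: str, message_hash: str, recent_hashes: set) -> bool:
    # Any layer can stop the message before an attachment is ever inspected.
    return (sender_is_plausible(sender)
            and link_reputation_ok(url)
            and not seen_elsewhere_recently(message_hash, recent_hashes))

print(allow_message("ceo@example.com", "http://badhost.example/login",
                    "abc123", {"def456"}))   # False: the link fails the reputation layer

The point is not these specific checks but the shape: each layer can veto the message early, so the last line of defence is rarely even needed.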
 
Going forward, a move to holistic protection networks and the centralisation of threat signatures is inevitable. New threats are detected every one-and-a-half seconds, and as this trend continues, a solution based on signatures downloaded to client machines can neither keep pace nor allow your machine to continue operating at the performance level you would expect while it attempts to do so.
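As a rough back-of-the-envelope calculation, using only the one-new-threat-every-one-and-a-half-seconds figure quoted above:

# Rough arithmetic on what "a new threat every one-and-a-half seconds" implies
# for a signature database pushed to every client machine.
seconds_per_day = 24 * 60 * 60
new_threats_per_day = seconds_per_day / 1.5        # 57,600 per day
new_threats_per_year = new_threats_per_day * 365   # roughly 21 million per year

print(f"{new_threats_per_day:,.0f} new threats per day")
print(f"{new_threats_per_year:,.0f} new threats per year")

Tens of thousands of new entries a day is exactly the kind of growth that is better handled by a shared, centralised lookup than by a file pushed down to every endpoint.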

An interview with HackersBlog

UPDATE: A couple of days after this interview, HackersBlog released the details of their latest successful compromise, Tiscali UK. Once again they gained access to user data, including username, firstname, surname, company, telephone, regdate, lastlogin, email and hashed password.

 

 

 

After many high-profile compromises over the past few months, the Romanian hacking project HackersBlog is rapidly gaining visibility on the web security scene. The recent web site compromises that HackersBlog lay claim to include: Kaspersky, F-Secure, Symantec, Bitdefender, Second Life, Facebook, Hi5, StayFriends, International Herald Tribune, Yahoo!, The UK National Lottery, UK newspaper The Telegraph and, most recently, British Telecom.

 


 

 

HackersBlog operate under their own code of ethics, which means that they will not publicly expose website problems that carry a high risk of exploitation, they will not save or distribute private data from compromised web sites, and they contact the website owner with details of the vulnerabilities exploited so that the necessary remediation can be carried out (full code of ethics here).

 

I decided to contact the group to find out a little more about how they operate, why they do what they do, and importantly to ask them for any general advice that can help everyone provide a more secure online experience for their customers.

 

I have left the answers below exactly as they were received. I think you'll agree that even the most high-profile websites can learn from the compromises detailed on HackersBlog. Perhaps the biggest lesson to keep in mind, though, is that without proper regard for security as an integral part of the design process, we are all potential victims.

 

How long has your group existed, why did it come into being and what motivates you to continue?

We are coming from romanian “blackhat” teams that used to compete against each other. We united for a better purpose, that of informing the public of the dangers on the internet.

Is anonymity necessary for conversation or are you safe from prosecution simply because of a lack of international co-operation around cybercrime?

No comment.

We have seen you target security vendors recently, a newspaper, and now telecoms companies; is there a method behind your choice of targets?

We dont have an agenda. Usually, when we find a vuln in a website, we try to show that their competitors can face the same problems. We dont like to spend too much time diggin vulns only in one type of websites but rather try to diversify and enlarge the spectrum of our research.

On average what ratio of “successes” do you have when attempting to compromise professional enterprise level web sites?

Lets look at it from a different perspective. We are using only very well known methods and therefore the return is somewhere around 15-20%. If someone is using blackhat techniques the results can grow exponentially since the ethic would not stop that person in his doings.

What are the top 5 “schoolboy errors” made by the professionals when designing or securing their sites, errors that you really shouldn’t be seeing?

When the attack is manual (without making use of certain softwares used in scaning/verifying vulns) the error messages generated by the site are of crucial importance to the attacker. One of the main issues here is that coders forget the error reporting activated.
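To illustrate that first point about error reporting, here is a minimal sketch of the alternative, in Python rather than PHP and with entirely hypothetical names: the detailed error goes to a private log for the developers, while the visitor only ever sees a generic message that gives an attacker nothing to work with.

# Minimal sketch: detailed errors go to a private log, the visitor only ever
# sees a generic message. All names here are hypothetical.
import logging

logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

ARTICLES = {"23": "Article 23 body text"}   # stand-in for a real data store

def render_article(article_id: str) -> str:
    try:
        return ARTICLES[article_id]         # raises KeyError for unknown ids
    except Exception:
        # The full traceback is recorded where only developers can read it.
        logging.exception("failed to render article %s", article_id)
        return "Something went wrong. Please try again later."

print(render_article("does-not-exist"))     # generic message, no stack trace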

Another serious mistake is “trusting” the data coming from the user (forms and such) as being genuine without further verification.
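To illustrate that second mistake, here is a minimal sketch using Python and an in-memory SQLite database with a hypothetical users table. The first query trusts the form input and is trivially injectable; the second binds the same input as a parameter, so it can never change the meaning of the SQL.

# Illustrative only: a hypothetical login lookup, shown unsafely and then safely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

user_supplied = "alice' OR '1'='1"   # a classic injection attempt typed into a form

# Unsafe: the user's text becomes part of the SQL statement itself.
unsafe_sql = "SELECT * FROM users WHERE username = '" + user_supplied + "'"
print(conn.execute(unsafe_sql).fetchall())          # returns every row in the table

# Safer: the input is bound as a parameter and never interpreted as SQL.
safe_sql = "SELECT * FROM users WHERE username = ?"
print(conn.execute(safe_sql, (user_supplied,)).fetchall())   # returns nothing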

Another factor that cannot necesarly be taking as a mistake but which we believe can generate problems to the website or the server where the site  is hosted is the presence in the links, of the parameters in their “normal” form. For instance:  .php: ?parameter1=val1&parameter2=val2. A whole lot of “vulnerability scanners” search the web for sites with this kind of parameters because they are easily identifyable and can the be tested in the hope of finding security holes. Instead, if the parameters would be included in a “SEO friendly URL”, such as: /articol-23.html, those scanners would fire in the dark because the link will not have a standard structure anymore: .php?p1=v1&p2=v2.
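As a small sketch of that last point, assuming a hypothetical articol route and using a plain Python regular expression rather than any particular framework: the page still receives its numeric parameter internally, but the public link no longer advertises the ?parameter1=val1 structure that automated scanners hunt for.

# Sketch: serve "/articol-23.html" publicly while the parameter stays internal.
import re

ROUTE = re.compile(r"^/articol-(\d+)\.html$")   # hypothetical SEO-friendly pattern

def resolve(path: str):
    """Map a friendly URL back to the internal article id, or None."""
    match = ROUTE.match(path)
    return {"article_id": int(match.group(1))} if match else None

print(resolve("/articol-23.html"))            # {'article_id': 23}
print(resolve("/index.php?parameter1=val1"))  # None: old-style URLs are not served

As the answer itself concedes, this only frustrates naive scanners; the parameter still has to be validated once it reaches the application.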

Based on these “mishaps” and along with many others we can outline the most common vulns found on the web: Cross Site Scripting,  SQL Injection, Local Path Disclosure, Local File Include and Remote File Inclusion, Remote Code Execution… Of course, this is just a short list and there are more solutions out there, available to anyone.

Do you think that companies are getting smarter about securing their online assets as time goes on or have no lessons been learned in the time that you have been active?

It is too early for us now to opinate about this since our presence online in this format (whitehat) is not very old. However, anyone who has to deal with online security can confirm that sites are safer and better protected now then they were a few years ago, also because there were people and companies out there who pointed out the problems they found.

Kind regards

2fingers