The history of the botnet – Part III

This article is the third and final part of a series; the first part can be found here.

Image used under Creative Commons from jerekeys' Flickr photostream


 
Lost in the white noise.
 
Since the second half of 2007, criminals have been abusing the user-generated content aspect of Web 2.0. The first alternative command & control channels identified were blogs and RSS feeds: commands were posted to a public blog by the criminal, and the bots retrieved those commands through an RSS feed. Likewise, output from the infected machines was posted to an entirely separate, legitimate public blog for later retrieval by the command & control server, again over RSS.

As Web 2.0 services have multiplied and even gained a certain level of acceptance within the enterprise, criminal innovation has continued apace. Compromised, otherwise innocent, servers in Amazon's EC2 cloud, for example, have been used to host configuration files for the ZeuS bot. Twitter has been used as the landing page URL in spam campaigns, in an attempt to overcome URL filtering in email messages. Twitter, Facebook, Pastebin, Google Groups and Google App Engine have all been used as surrogate command & control infrastructures. These public forums have been configured to issue obfuscated commands to globally distributed botnets; the commands contain further URLs, which the bot then accesses to download instructions or components.

The attraction of these sites and services lies in the fact that they offer a public, open, scalable, highly available and relatively anonymous means of maintaining a command & control infrastructure, while at the same time further reducing the chance of detection by traditional technologies. While network content inspection solutions could reasonably be expected to pick up on compromised endpoints communicating with known-bad sites (C&C), or over suspicious or unwanted channels such as IRC, it has historically been safe to assume that a PC making a standard HTTP GET request over port 80 to a content provider such as Facebook, Google or Twitter, even several times every day, is acting entirely normally.
However, as botnet owners and criminal outfits seek to further dissipate their command and control infrastructure and blend into the general white noise of the internet, that is no longer the case.
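As a rough illustration of why this traffic blends in so well, the sketch below shows how a command hidden as a base64 token inside an otherwise innocuous public post can be recovered. The post text, the encoding scheme and the embedded URL are all invented for illustration; real bots used varied, often proprietary obfuscation on top of this basic idea.

```python
import base64

# Hypothetical public post: to a filter it reads as ordinary chatter,
# but one "word" is a base64-encoded follow-on URL. (Invented example.)
post_text = "enjoying the weekend! aHR0cDovL2V4YW1wbGUub3JnL3BheWxvYWQuYmlu"

def extract_command(text: str) -> str:
    """Try each whitespace-separated token as base64; return the first
    one that decodes cleanly to a URL-like string, else ""."""
    for token in text.split():
        try:
            decoded = base64.b64decode(token, validate=True).decode("ascii")
        except Exception:
            # Not valid base64, or not printable text - ordinary words fail here.
            continue
        if decoded.startswith("http"):
            return decoded
    return ""

print(extract_command(post_text))  # → http://example.org/payload.bin
```

To the network, retrieving that post is a single HTTP GET to a reputable content provider; nothing about the transaction itself distinguishes it from a user reading the same page.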
 
Of course, we can fully expect criminals to continue this unceasing innovation as we move forward: more botnets will take advantage of more effective peer-to-peer communication, update and management channels. Communications between bots, or between bot and controller, will become more effectively encrypted, perhaps through the adoption of PKI. Command & control functionality will be more effectively dissipated, using cloud services, peer-to-peer networks and covert channels through compromised legitimate services. Spamming capabilities will be enhanced. Botnets such as the pernicious Koobface already use social networking services for propagation, by sending messages and making posts; we can fully expect to see social networking spam capabilities being added to bot agents in the very near future.
 
Where do we go from here?
 
So what can we do? Is all hope lost? Not entirely, I would argue. The battle continues in a war that must be waged on several fronts. Governments and international organisations such as the EU, OECD and UN need to provide a strong focus on the global harmonisation of criminal law in the area of cybercrime, enabling more effective prosecution. Law enforcement agencies need to formalise multilateral agreements to tackle a crime that is truly transnational. Internet service providers and domain registrars also have a key role to play: ISPs should be informing and assisting customers that they believe to be compromised (a trend which happily appears to be on the increase), and should be terminating service to customers they believe to be acting maliciously. Domain registrars should be demanding more effective forms of traceable identification at the time of registration, and bad actors should have their service suspended as soon as credible suspicion is raised.
 
The security industry is already drawing valuable lessons from the levels of co-operation achieved among rivals during the fight against Conficker, and hopefully this effective co-operation will continue and deepen. Initiatives must be financed at a national level to more effectively educate and inform citizens of the dangers posed by cybercrime and to encourage safer computing practices. Lastly, the security industry must not rest on its laurels; we can take heart in past successes, but we cannot rely on past technology alone. Innovation is the key to keeping up with, and hopefully surpassing, the techniques developed by the bad guys.
