Recent events, however, have demonstrated that the impact of DDoS attacks is far greater than it might appear. Not only can these attacks inflict huge economic losses, but they can also seriously damage the reputation and image of the victimized company or organization.
Another worrying trend observed in recent DDoS attacks is that in addition to targeting web infrastructures, attackers are also trying to exploit flaws and improper configurations in Domain Name System (DNS) infrastructure. Arbor Networks’ 2012 Worldwide Infrastructure Security Report indicated that 41% of respondents experienced DDoS attacks against their DNS infrastructure.
Moreover, the targets of a DDoS attack do not fit into a specific category. Providers of online banking, payment services, email services and just about every other type of web service provider are prime candidates.
Similarly, there is no typical profile of an attacker – cyber criminals, hacktivists and state-sponsored hackers all use similar tactics to hit a large list of targets.
Principal Categories of DDoS Attacks
The security community classifies DDoS attacks as follows:
- Volume Based Attacks – The attacker tries to saturate the bandwidth of the target’s website by flooding it with a huge quantity of data. This category includes ICMP floods, UDP floods and other spoofed-packet floods. This type of attack is very common and simple to execute using the vast quantity of free tools available on the Internet, and, as such, is very popular in the hacktivist underground. The magnitude of Volume Based Attacks is measured in bits per second (bps).
- Protocol Attacks – The attacker’s goal is to saturate the target’s server resources or those of intermediate communication equipment (e.g., load balancers) by exploiting network protocol flaws. This category includes SYN floods, Ping of Death, fragmented packet attacks, Smurf DDoS and more. The magnitude of Protocol Attacks is measured in packets per second (pps).
- Application Layer (Layer 7) Attacks – Designed to exhaust the resource limits of web services, application layer attacks target specific web applications, flooding them with a huge quantity of HTTP requests that saturate a target’s resources. Application layer attacks are hard to detect because they don’t necessarily involve large volumes of traffic and require fewer network connections than other types of DDoS techniques. Examples of application layer DDoS attacks include Slowloris, as well as DDoS attacks that target Apache, Windows, or OpenBSD vulnerabilities. The magnitude of application layer attacks is measured in requests per second (rps); a short sketch of how these three measurements are computed follows this list.
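To make these three units of measurement concrete, the following minimal Python sketch computes bits per second, packets per second and requests per second over a sliding one-second window. The feed format, field names and window size are illustrative assumptions, not part of any particular monitoring product.

    from collections import deque
    import time

    class TrafficMeter:
        """Track bps, pps and rps over a sliding time window (illustrative sketch)."""

        def __init__(self, window_seconds=1.0):
            self.window = window_seconds
            self.samples = deque()  # entries of (timestamp, size_in_bytes, is_http_request)

        def record(self, size_bytes, is_http_request=False, now=None):
            now = time.time() if now is None else now
            self.samples.append((now, size_bytes, is_http_request))
            # Drop samples that have fallen outside the window.
            while self.samples and now - self.samples[0][0] > self.window:
                self.samples.popleft()

        def rates(self):
            """Return (bps, pps, rps) for the traffic seen in the current window."""
            bits = 8 * sum(size for _, size, _ in self.samples)
            packets = len(self.samples)
            requests = sum(1 for _, _, is_req in self.samples if is_req)
            return bits / self.window, packets / self.window, requests / self.window

A volume-based flood would show up mainly in the first figure, a protocol attack in the second and an application layer attack in the third.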
The increase in the magnitude and complexity of DDoS attacks highlights the need for organizations to adopt proper countermeasures and mitigation techniques. Naturally, time is of the essence when it comes to DDoS protection. Prompt DDoS detection is a critical phase of the mitigation process – the faster security systems can detect a potential threat, the better the chance of minimizing damage and even neutralizing the threat.
Firms that provide solutions for DDoS mitigation follow various approaches to protect their customers. The first step in protecting a company’s web infrastructure against a DDoS attack is to identify normal conditions for network traffic. This definition of normal “traffic patterns” is a necessary baseline for threat detection and alerting. The majority of commercial solutions provide threshold-based alerting mechanisms that trigger alerts based on the collection of meaningful information from the logs.
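As a simplified illustration of such threshold-based alerting, the sketch below learns a baseline request rate from historical log samples and flags traffic that exceeds the mean by a configurable number of standard deviations. The numbers and sensitivity value are arbitrary examples, not vendor defaults.

    import statistics

    def build_baseline(historical_rates):
        """Learn the normal traffic pattern from per-minute request counts in the logs."""
        return statistics.mean(historical_rates), statistics.pstdev(historical_rates)

    def is_anomalous(current_rate, mean, stdev, sensitivity=3.0):
        """Alert when the current rate exceeds the baseline by `sensitivity` standard deviations."""
        return current_rate > mean + sensitivity * stdev

    # Hypothetical baseline learned from historical log data.
    baseline_mean, baseline_stdev = build_baseline([120, 135, 110, 140, 128, 131])
    if is_anomalous(900, baseline_mean, baseline_stdev):
        print("ALERT: request rate far above the normal traffic pattern")

Commercial systems typically combine many such signals (per-source rates, protocol mix, geographic distribution), but the thresholding principle is the same.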
Another common detection approach is known as “layered filtering,” in which dedicated appliances and software detect and mitigate different types of attacks at both the network and application layers. Defense mechanisms that analyze traffic layer by layer try to detect harmful traffic and apply filters to block threats at the appropriate level. Many companies also adopt open source software to limit the number of incoming connections and the volume of traffic.
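The connection limiting mentioned above is usually implemented with reverse proxies or firewall modules; as a language-neutral sketch of the underlying idea, here is a hedged Python example of a per-client token bucket. The rate and burst parameters are arbitrary and would need tuning against the site’s normal traffic profile.

    import time
    from collections import defaultdict

    class TokenBucketLimiter:
        """Allow roughly `rate` requests per second per client, with a burst allowance."""

        def __init__(self, rate=10.0, burst=20.0):
            self.rate = rate
            self.burst = burst
            self.tokens = defaultdict(lambda: burst)
            self.last_seen = {}

        def allow(self, client_ip, now=None):
            now = time.time() if now is None else now
            elapsed = now - self.last_seen.get(client_ip, now)
            self.last_seen[client_ip] = now
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens[client_ip] = min(self.burst, self.tokens[client_ip] + elapsed * self.rate)
            if self.tokens[client_ip] >= 1.0:
                self.tokens[client_ip] -= 1.0
                return True
            return False  # over the limit: drop, delay or challenge the request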
Traditional DDoS mitigation solutions rely on overprovisioned network bandwidth and complex hardware such as firewalls and load balancers. Many experts consider this approach to be unnecessarily costly and, in many cases, ineffective. For this reason, many companies have chosen to adopt a cloud-based approach to DDoS protection with direct management of DNS services, enabling them to optimize their response to malicious events. Another advantage of a cloud-based approach is the reduction of investment in equipment and infrastructure (capex) as well as the reduced cost of managing and maintaining typical hardware solutions (opex).
Key Criteria for Evaluating DDoS Mitigation Solutions
Choosing a DDoS mitigation solution is far from a simple task, given the numerous alternatives and choices, such as hardware versus software, appliance versus cloud-based solutions, etc. To simplify your decision process, the following checklist includes the most important features/criteria to evaluate before acquiring a new product:
- Capacity of the solution in terms of protocols supported, analysis paths implemented and the granularity offered for traffic inspection.
- Support for traffic profiling. Companies offering a variety of services may wish to define a different policy for each service, since normal traffic patterns for various services can differ substantially. For example, on a banking website, traffic from users who simply visit the portal must be differentiated from traffic generated by customers who access home banking functions (see the profiling sketch after this list).
- Product flexibility – the ability to create ad hoc policies and patterns starting from well-known configurations.
- Product scalability – the product should be able to evolve and scale with the changing needs of the buyer.
- Availability of built-in hardware redundancy features.
- Availability of an efficient reporting/alerting system. Various solutions provide very different levels of reports and alerts – these features should be evaluated with care.
- Reliability – DDoS is a dynamic threat that morphs over time. Be sure to choose a solution provider that is able to provide continuous updates and prompt support for its products.
- Bidirectional traffic monitoring – It is important to control both inbound and outbound traffic to prevent the abuse of network resources by attackers.
- Product reputation and customer references. This is a crucial aspect that must take into account both the product’s features and its maintenance services.
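To illustrate the traffic profiling criterion from the checklist above, the sketch below assigns a separate policy to each service area of a site, so that anonymous portal browsing and authenticated home banking traffic are judged against different baselines. The paths and thresholds are purely hypothetical.

    # Hypothetical per-service policies: each area of the site has its own normal pattern.
    POLICIES = {
        "/public":  {"max_rps": 500, "max_conn_per_ip": 50},  # visitors browsing the portal
        "/banking": {"max_rps": 100, "max_conn_per_ip": 5},   # authenticated home banking functions
    }

    def policy_for(path):
        """Return the most specific policy matching the request path, defaulting to /public."""
        for prefix, policy in sorted(POLICIES.items(), key=lambda item: -len(item[0])):
            if path.startswith(prefix):
                return policy
        return POLICIES["/public"]

    print(policy_for("/banking/transfer"))  # -> {'max_rps': 100, 'max_conn_per_ip': 5}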
As noted earlier, one of most popular mitigation approaches is cloud-based DDoS mitigation. Such solutions are offered by Incapsula, Prolexic and Verisign, among others. Successful mitigation depends on the ability to monitor and analyze traffic patterns in real time. When a DDoS attack is detected by monitoring systems, the malicious traffic is redirected from the targeted website to a mitigation architecture through the cloud. Inbound malicious traffic is sent to the nearest scrubbing center, where the mitigation solution applies DDoS filtering and routing techniques to reduce DDoS traffic interference. The clean traffic is then routed back to the customer’s network. Accordingly, the capacity of the scrubbing centers and the filtering methods used are crucial for the provisioning of an efficient DDoS mitigation service.
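Because this redirection is typically achieved through a DNS change, a quick sanity check of whether traffic is actually flowing through the mitigation cloud is to look at where the protected hostname currently resolves. The sketch below does this with the Python standard library; the scrubbing-network prefix and domain are hypothetical placeholders.

    import ipaddress
    import socket

    # Hypothetical address range announced by the provider's scrubbing centers.
    SCRUBBING_NETWORK = ipaddress.ip_network("198.51.100.0/24")

    def routed_through_scrubbing(hostname):
        """Return True if the hostname currently resolves into the scrubbing network."""
        address = ipaddress.ip_address(socket.gethostbyname(hostname))
        return address in SCRUBBING_NETWORK

    print(routed_through_scrubbing("www.example.com"))  # hypothetical protected domain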
To get an industry expert’s take on these topics, I contacted Incapsula, one of the leading providers of DDoS mitigation services. Incapsula offers Web Security, DDoS Protection, Failover & Load Balancing on a Global CDN. The company was spun out of and is financially backed by Imperva [IMPV], a leading provider of data security solutions. Here are excerpts from my interview with Incapsula’s CEO, Gur Shatz.
What are the key criteria for a successful DDoS mitigation service?
“Well, there are various factors that contribute to a successful DDoS mitigation solution, such as:
- Network size: You need a mitigation service that can handle the largest possible attack that could come your way. Since attacks are becoming larger at a disturbing rate, anything below 250Gbps of network capacity just isn’t enough.
- Automatic detection: There are many ways to launch a DDoS attack, and sometimes the nature of the attack rather than its size is what makes mitigation so hard. Take, for example, hit and run attacks which are short bursts of traffic in random intervals over a long period of time. A manual mitigation solution that requires users to turn it on and off on every burst will throw the IT team into complete havoc. Some solutions, like Incapsula, offer automatic DDoS mitigation and take full responsibility for both detection and mitigation of the attack.
- Transparent mitigation: DDoS is about degradation of service. While this can be complete denial of service, it can also be disruptions. If your DDoS mitigation service introduces a large rate of false positives or degrades the normal user experience in any way, the DDoS attack is actually achieving its goal – even if your service is still up and running. Unless your mitigation service can offer zero disruption to the normal user experience, you will not be able to withstand lengthy attacks without damaging business performance.
- Time and complexity to onboard: A key factor in a DDoS service is the time it takes to on-board the service. There are various techniques and setups – the more complex ones require on‑premise devices and configuration, while the faster ones require only a simple DNS change. When you are under fire, you’ll appreciate having chosen a solution that can shield your network from that attack with minimal time and effort.
- Support: A 24×7 team of experts is an essential part of a reliable DDoS mitigation service. Being under a DDoS attack is one of the most frustrating situations for any IT manager. You have practically no visibility into what is happening, there is nothing you can do internally and your entire service is down. You need an expert by your side who can help you understand what is going on during the attack and get you through it as painlessly as possible.”
What are the main DDoS attack trends you are observing?
“The principal trends that we are observing are:
- Larger and larger network attacks. These large-scale attacks are often using SYN flood and DNS amplifications as their tool of choice.
- Hit and Run attacks. These are smaller scale application layer attacks that don’t last very long, but occur every few days.”
What are the strong points of your Cloud-based solution?
“I believe that our true strengths lie in a number of aspects of our service:
- We offer a cloud-based service that can be activated without any additional hardware, software or other integration requirements. Adding a website to Incapsula is done through a simple DNS change, which allows us to offer our services to practically anyone regardless of company size, IT manpower or expertise.
- A large network with more than 300Gbps of capacity that can handle practically any attack out there.
- Transparent and automatic mitigation of attacks with no negative (and in most cases positive) effect on legitimate users’ experience.
- Having a built-in CDN and Web Application Firewall allows our customers to always be online and automatically mitigate attacks while improving overall user experience and overall security.”