Friday 9 August 2013

Honeypots Lure Industrial Hackers Into the Open

Dummy water-plant control systems rapidly attracted attention from hackers who tinkered with their settings—suggesting it happens to real industrial systems, too. 
A man sits in front of a bank of monitors.
Just 18 hours after security researcher Kyle Wilhoit connected two dummy industrial control systems and one real one to the Internet, someone began attacking one of them, and things soon got worse. Over the course of the experiment, conducted during December 2012, a series of sophisticated attacks were mounted on the “honeypots,” which Wilhoit set up to find out how often malicious hackers target industrial infrastructure.
Wilhoit’s findings provide some of the best evidence yet that people are actively looking for, and attempting to take unauthorized control of, the type of industrial systems used to control everything from energy plants to office HVAC systems. In recent years, U.S. politicians have warned of, and researchers have demonstrated, the vulnerability of such systems, and thousands are known to be connected to the Internet with weak or nonexistent controls against unauthorized access (see “What Happened When One Man Pinged the Whole Internet”).
“Everybody talks about [industrial control systems] being attacked, but no one has any data to back that up,” says Wilhoit, who works for the computer security company Trend Micro. “I know that my ICS honeypots have definitely been ‘owned,’ and I think there’s a reasonable likelihood that it has happened in the wild.”
Last year, the then-defense secretary Leon Panetta warned that successful attacks had been made on the control systems of U.S. electricity and water plants and transportation systems. But since then, little has been disclosed publicly about such incidents. A March newsletter from the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team contains one of the few public disclosures of such an attack, saying that energy management systems at a factory and a state government building in New Jersey were compromised in 2012.
Wilhoit’s work suggests that this may be just the tip of the iceberg. He used three different honeypots, each of which was carefully designed so that an attacker would believe he or she had discovered a computer that controlled physical settings on an industrial system. One of the decoys offered administration Web pages for a water pump that acted like the real thing; another was a physical server installed with software commonly used to control physical industrial equipment; Wilhoit also bought a piece of hardware used to connect a computer to control industrial equipment and installed it in his basement so that it appeared to control the HVAC and lighting of a factory.
A total of 39 attacks were mounted on Wilhoit’s honeypots, some of which involved modifying the settings of the physical system they appeared to control. Attacks appeared to originate from computers in a variety of countries, with 35 percent from China, 19 percent from the U.S., and 12 percent from Laos. Attackers often appeared to use automated tools that search out industrial systems on the Internet before investigating more thoroughly.
The most striking attacks exploited bugs to change the settings of Wilhoit’s imaginary industrial systems. “They were doing things that would change the water pressure, or temperature, or stop the flow on the water pump,” says Wilhoit. “If it is happening to a honeypot, what is happening to real devices with no protections? It is apparent that there is some expertise out there.”
Because the attacks made use of techniques specific to industrial control systems, Wilhoit believes they were carried out by people intent on finding and messing with such systems. Some of the attacks involved sending e-mails to the administrator address he made available. Attachments to those e-mails hid previously unknown malicious software that Trend Micro is now investigating. “I can’t relate all the details,” he says. “It’s substantial as findings go.”
Wilhoit will say that the malware he received appeared to be designed to take over a commonly used controller for industrial control systems. Wilhoit and colleagues at Trend Micro are now operating more honeypots, setting them up in locations around the world to record a global picture of such activity. They are also working on new strategies that could help defend such systems.
Many relatively simple countermeasures do exist to protect industrial systems, but they are not routinely used, says Billy Rios, a security researcher who works on industrial control systems at security startup company Cylance, and who has disclosed hundreds of bugs in common industrial systems. Removing these systems from the publicly accessible Internet is the most crucial defensive measure, says Rios. However, even systems accessible only over a private connection can be reached by targeting a company’s employees for passwords, and once an attacker gains access to an industrial system there are often many security bugs to exploit. “The security of the industry in general is really poor,” says Rios.
Joel Young, chief technology officer of Digi International, which sells hardware used to connect industrial control systems to the Internet, says that the companies he sells to have traditionally thought of reliability and privacy as more important than security. Home energy management systems and smart meters, for example, have been carefully designed with features intended to guard privacy due to public concern about their leaking energy use data. “But if you look at putting a server out to monitor a substation, there’s just no security at all,” he says. “It’s like, who would want to hack into this?”
Rios says that companies that make industrial control systems have begun to pay more attention to security issues, but he’s still finding bugs in new products, and in many older products that remain in use. “These devices have a lifecycle of 20 to 30 years.”

Wall Of Sheep Hacker Group Exposes NFC's Risks At Def Con 2013

On the final day of Def Con 2013, I had the unique opportunity to interview the hackers behind the Wall of Sheep hacker group. The objective of Wall of Sheep is to teach computer users around the world to protect their personal data by taking simple security precautions when they connect to networks. The security of online data has never been more prominent in the public mind than it is now. With the recent NSA revelations, many in the U.S. are still wondering about the privacy of their personal data. People may also be thinking about what steps they can take in their everyday computing to protect themselves from malicious hackers while using web browsers and mobile phones.
I spoke directly with the awesome guys behind the Wall of Sheep hacker group, whose motto is, “Security Awareness For the Flock.” Their mission is that of the hacker: the good kind, that is. It’s about showing the world what hacking does “outside the box” to liberate technology so it can perform outside the confines of its original purpose. We talked about the groundbreaking results of their NFC Security Awareness Project. Wall of Sheep showed me a demo of the NFC hack they have developed that exposes a major security risk for users of this smart technology. According to the Wall of Sheep security experts I spoke with at Def Con 2013, the “potential risk comes from someone with malicious intent creating or replacing an existing NFC tag with infected content. Malicious intent can vary from collecting unauthorized information about the device to changing the device settings to delivering malicious software to the device for remote access.”
Michael Venables: How would you define “hacking” and what is its main objective?
Wall of Sheep: Hacking is taking anything that’s available to any individual who is looking to take that item, whether it’s technology, food, or whatever, and using it for a purpose for which it wasn’t intended. A lot of times, it’s making it better, it’s becoming some new form of innovation or it could be solving a problem that the item inherently had.
So, a really good example of hacking that’s happened since the 1900s is in the automotive industry. People would take a stock car, they’d open it up, and they’d modify the motor and make it go faster and faster for racing. It’s not intended to do that, but they’d modify it to make it do something it wasn’t necessarily designed to do. And that’s a form of innovation that generates new and better, faster technology. Thomas Edison, Benjamin Franklin — they’re all hackers.
Venables: Many people conflate the world of hacking with a technology movement based on malicious goals. How do you distinguish between the modification of existing systems and the guys who want to hack government agencies?
WOS: We’re about security awareness and finding ways of understanding the risks that are inherent inside of products inside the market and helping the average, everyday citizen understand how to protect themselves from that. We’re on the good side. There are certainly people with malicious intent to do bad things with technology and the information that they get.
And you can take any stock piece of equipment and turn it into something it wasn’t meant to do. Our purpose is to take something that may have an inherent flaw in it, discover what the flaw is, and help educate the manufacturer about it so they can improve the product and make it better, and to prevent the people who are doing malicious things from being able to continue doing them.
Venables: Tell me about Wall of Sheep and what it’s about.
WOS: Wall of Sheep was created over a decade ago, and it was a group of like-minded individuals looking at the different traffic going by.
Let me start with an analogy that we like to give. Say you have two children sitting next to each other in a classroom, having a conversation, and the kid sitting right behind them, breathing on their necks, listens and hears the conversation. There’s nothing illegal about that. It’s a public conversation: they’re talking out loud, and he overheard it. We do something very similar at Def Con and other conferences. We listen to the network traffic. We’re just listening; we’re not hacking anything. We’re passively monitoring the traffic flows on the network. As we turn those conversations into something we can visually understand, we see a lot of people connecting to their services, such as email, FTP servers or any sort of technology, in an insecure way. So what we do is we put them up on this wall to let them know that they just leaked their data out to the world at a hacking conference. And we offer a service. They come up to our area and they say, “Hey, I’m on the wall. Can you help me understand what I did wrong, what the flaw was, and how to harden this so it doesn’t happen again?” We spend time with them, educating them, looking at the applications they’re using, and we tell them about alternatives to insecure mail and insecure FTP: there’s secure mail and secure FTP, and there is, of course, VPN. We guide them and help them to secure themselves.
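(To make the idea concrete: the “wall” is essentially a passive sniffer. A minimal, hypothetical sketch of the technique, using the scapy library to flag FTP logins sent in the clear, might look like the following; it is not Wall of Sheep’s actual tooling, and something like it should only ever be run on traffic you are authorized to monitor.)

```python
# Passive detection of credentials sent in the clear over FTP (port 21).
# A sketch of the Wall of Sheep idea, not their actual software.
# Requires scapy and permission to monitor the network in question.
from scapy.all import sniff, TCP, Raw

def flag_cleartext_logins(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load.decode("ascii", errors="ignore")
        # FTP sends "USER <name>" and "PASS <password>" unencrypted.
        if payload.startswith(("USER ", "PASS ")):
            print(f"cleartext credential fragment: {payload.strip()!r}")

# Listen only for FTP control traffic; store=0 avoids keeping packets in memory.
sniff(filter="tcp port 21", prn=flag_cleartext_logins, store=0)
```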
One of the things we say is that, if this is happening to the best of the best around the world at Def Con, what’s happening to the average citizen who doesn’t understand technology? They’re just leaking their information out and they don’t know any better. So we’re hoping to raise security awareness so they know how to protect themselves.
Think about this: Def Con is known as the world’s most hostile network. It’s a known thing all around the world. Everybody who comes to Def Con understands that Def Con has a huge number of hackers and that it’s a very dangerous place. Now why would you bring your laptop and check your email on the world’s most hostile network? It just doesn’t make sense, unless you don’t understand the risks. So what we’d like is for people to be able to connect, use their email and use their services here, or anywhere for that matter. You should be able to do business anywhere, if you put the proper security mechanisms in place.
And the thing we also say, quite often, is that at this hacking conference there are people who are identified, self-labelled or known to be hackers. You’re worried about them, and you’re a bit more on your guard because you’re here. But when you go home, what about the ones who are just out there doing bad things? You don’t know that they’re there. You should use the same protection there as you use here. You can’t just shut yourself off and turn off your machine. You need to continue doing things, just in a secure way.
The whole point of our project is really just security awareness: to help people get better educated on using secure protocols such as VPN. That’s really where our next project takes us. It’s a new security project that we did, working together, partnering on this. We have these tags that are buttons. The buttons have NFC tags inside of them. We also put NFC posters all around the area. Each NFC tag has a URL inside of it. All the ones that we put around here have something harmless in them. One says “Download music,” another says “Warning – this can be dangerous.” We have a lot of different things. We did some rickrolling just for fun. We had a good time with it.
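(For readers curious about what actually lives on such a tag: a URL is stored as a tiny NDEF, or NFC Data Exchange Format, record. The hand-rolled encoder below is an illustrative sketch of the NFC Forum’s URI record layout, not Wall of Sheep’s code; it handles only a single short record, and the example URL is invented.)

```python
# Encode a single short NDEF URI record, the kind stored on demo tags like these.
# Illustrative sketch of the NFC Forum URI record layout, not a full NDEF library;
# it only handles short records (payload under 256 bytes).

URI_PREFIXES = {  # abbreviation codes defined by the NFC Forum URI record type
    "http://www.": 0x01, "https://www.": 0x02,
    "http://": 0x03, "https://": 0x04,
}

def encode_uri_record(uri: str) -> bytes:
    code, rest = 0x00, uri
    for prefix, prefix_code in URI_PREFIXES.items():
        if uri.startswith(prefix):
            code, rest = prefix_code, uri[len(prefix):]
            break
    payload = bytes([code]) + rest.encode("utf-8")
    header = 0xD1          # MB | ME | SR flags set, TNF = 0x01 (well-known type)
    return bytes([header, 0x01, len(payload), ord("U")]) + payload

# A tag pointing at a harmless page and one pointing at a malware download are
# byte-for-byte the same kind of record; only the URL inside differs.
print(encode_uri_record("https://www.example.com/free-music").hex())
```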
What we have here in front of us is a demonstration of what actually can happen. About 70% of the phones on the market are old. People like to keep their smartphones; they can’t upgrade them. It costs a lot of money, so it makes sense: you wait out your contract and then you upgrade. So these older phones are susceptible to the tag that we’re going to show you. What happens is we can have your phone touch an NFC tag and download a piece of malware without you knowing, without it prompting you, and then we can take control of your phone’s SMS and get a clone of your SMS messages on ours.
Riverside had a tag that was benign. It can be a tag to download music, to get an e-book, or to look at a music schedule. And there are thousands of these large posters in airports that say, “Touch me to get some free music.” And there are huge marketing campaigns to get people to use NFC to get discounts at local stores. It’s an emerging marketing thing right now. If you contact any marketing firm, the biggest and hottest thing they’re trying to get their clients to do is to start using NFC — interactive marketing, or active media.
So when I click okay, it’s going to download it. Basically, it’s saying, “Would you like to install this virus?” And you say, “OK.” So I just touch it to install it. And this is what’s really important: it warns the user what services and resources on the device it’s going to access. [Really long list of services shown] Now if it’s just a flashlight application, it shouldn’t have this much access. It should be able to access at most one or two things. It definitely should never be able to access networking, GPS, your address book or anything like that. And we’ve seen malware that does that in the field, in the wild. This is warning us of what it can access. So, we say “Install” and it’s done.
As you can see, when opening this application it looks like Android Security Suite. I’m trying to be proactive and protect myself: I got security awareness from the Wall of Sheep, so I’m installing AV (antivirus) on my phone. But this actually happens to be malware. Unfortunately for the victim, it’s a fake security suite, a common trick used on PCs that millions of people fall for all the time. It gives you an activation code which is unique, so if my friend grabs a copy they don’t become suspicious; if it kept showing the same code, people wouldn’t install it, so a little trickery is involved. Now I can just say, let me go home — my life is good. But I’m pretending to be a bad guy. That [malware] got installed on the phone. (CedoxX sends a text message to the test phone.) At first, nothing’s going to happen. But when it receives the first message from anyone in the world, it’s going to go to a website that the bad guy controls. When it connects to that website, I, as the bad guy, am monitoring it, and I see that this smartphone number is infected. (Test phone receives a new text message at this point.) New message arrives. (CedoxX retrieves the new message on the test phone.) Now that it has connected, the bad guy can control the phone.


As you can see, the bad guy would now know the phone number. I’ll just redirect them to CedoxX’s test phone. (Heal sends a command [slash + phone number] to the test phone.) This phone (the test phone) receives it; then on his phone he’s going to get information about that [test phone]. He just received all of these different pieces of information from the phone: make, model, device ID, malware version, and Android version. Sent, just like that. From the user’s perspective, all they would see is a slash and the phone number. And they’re going to go, “Oh, it’s a typo or it’s SMS spam,” or think they accidentally pushed some buttons on their phone.
Screenshot of Android Security Suite-disguised malware accessing a mobile device. Image courtesy Wall of Sheep
Once a smartphone is infected, the criminals on the other end have the power to send commands to the infected phone, and all SMS activity sent to the infected phone can be monitored on another device. (Heal sends CedoxX’s test phone another text message: “The password is: BeSecure.”) The monitoring phone displays the exact message, marked with a double chevron indicating that it’s a cloned message (>> The password is: BeSecure). The process is called “SMS interception and redirection.” Riverside went on to say, “We are not trying to discourage people from using NFC; we are promoting security awareness by encouraging them to use caution when scanning NFC tags they don’t control.”
End of the transcript of the NFC hack demoed for me by Wall of Sheep at Def Con.
WOS: Going back to the first question you asked, “What’s a hacker?” As security researchers, we actually hacked the malware and changed it so anything that would have gone to an unknown third party has been removed and only goes to us, so it’s in our control. That’s a perfect definition of a hacker.
Venables: But in doing so, you’re proving that NFC has some built-in weaknesses in the design of the technology.
WOS: It has some risks, just like any technology. And if you understand the risks, you can choose to continue using it or not, just like anything in life. With a car, you have a risk of having an accident. Do you choose to get in the car and drive it? That’s up to you. If you know the risks, you can make the decision.
We’re not saying that NFC is bad, and we’re not discouraging use. If the device, if the tag is in your control, and you know where it’s been, then it’s fine. And there are a lot of people who do that now, and it’s very safe.
Venables: How can consumers best protect the security of the data that they are sending out?


WOS: The best advice is that you can’t trust anybody and anything between you and the endpoint that you want to communicate with. A lot of people at Def Con will connect to the wireless network or the secure wireless network. That secure wireless network is hosted by a group of volunteer hackers, so you don’t know what they’re doing or listening to on the backend. And then it goes to AT&T or some ISP, and you don’t know what they’re doing or listening to. And it could bounce through a number of other places. The NOC here that’s running the secure wireless depends on services from the hotel. They connect to the hotel’s network that connects up to an ISP. There could be a lot of stuff in between the NOC that’s supporting the secure wireless before it even leaves this building. There could be taps all along that way.
The best and simplest way to do it is to use a form of virtual private network, a VPN. You connect to a VPN and it encrypts your connection from where you are to a point. Then you need to ensure that everything going from your machine to wherever you want to reach is encrypted. If you connect to the email server, you use encryption. If you are connecting to anything, you should use encryption from your device to the endpoint.
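(A concrete example of that last point, using Python’s standard mail library: with STARTTLS the login below travels over an encrypted channel; without the starttls() call it would cross the conference network in the clear. The server name and credentials are placeholders, not real accounts.)

```python
# Upgrade a mail connection to TLS before any credentials are sent.
# Hostname, port and login details are placeholders for illustration.
import smtplib
import ssl

context = ssl.create_default_context()

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls(context=context)   # switch to an encrypted channel first
    server.login("user@example.com", "app-password")
    # Everything from the starttls() call onward, including the login above,
    # is encrypted between the device and the mail server.
```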
Do research on the kind of VPN company you are considering. Find out if they keep server logs, and whether they have multiple POPs and randomize outgoing traffic, so it’s not easy for someone to identify your traffic. The other thing is that cell phones now have the capability to set up a VPN, so users should be setting up VPNs on their phones as well.
Practice endpoint protection as well. If you are going to install something, don’t just click through whatever is prompting you. Take the time to read it. Then stay on high alert and ask yourself, “What are the implications if I install this? How can I remove it?” Also, especially on Android, there are endpoint antivirus apps that will check whether something is malware, and even privacy advisors: if you just blindly click through a prompt, they will notify you later on that an application is trying to do something it shouldn’t. Better awareness and better protection.
Here is the cautionary tale from the Wall of Sheep security experts: Caveat usor. User beware. Practice the awareness of the informed, cautious technology consumer. If you receive a text message on your phone containing just a slash and a phone number, beware: your phone might be infected with malware. Until this NFC flaw is fixed, it is an ongoing risk that all cell phone users should be wary of. One last point: just be glad that these guys are looking out for the protection of your data.
Special thanks to the guys of Wall of Sheep for accommodating my last-minute request for this interview and for demoing the steps of their awesome NFC hack at Def Con 2013.
Wall of Sheep is owned and operated by Aries Security and an army of volunteers.

Tools To Hack Android Phones Are Getting Easier To Use

Security researchers have long maintained that malware is a problem on Android, the Google operating system that’s on 80% of the world’s smartphones. In extreme cases, hackers with malicious intent can do more than send premium text messages – they can turn a phone into a spying tool too. The scenario was recently demonstrated at hacker conference Black Hat, and in one real-life incident, an unnamed company executive unwittingly became a conduit to short-sellers who were listening in on a board meeting he attended — all possible thanks to the smartphone in his pocket.
The crackers had set up a rogue cell tower nearby and surreptitiously turned on his device’s mic once the company meeting was underway. Not long after, an organization shorted the stock of his firm and netted $30 million. The incident took place in the last year, according to Gregg Smith, the CEO of mobile security company KoolSpan, and is by no means an isolated case. In fact, researchers say it’s becoming easier to take control of certain Android device features, like the mic or camera, with free online tools that are becoming more user-friendly.
Security research firm Symantec recently highlighted a remote access tool (or RAT) known as AndroRAT being exchanged in underground forums, which, together with a new tool called a binder, allows attackers to scrape personal information from an Android phone.
AndroRAT can retrieve a phone’s call logs, monitor SMS messages and calls, take photos and make a call. Once a would-be cracker has downloaded the remote access tool, they can use the binder to package AndroRAT into a legitimate-looking app, such as a game like Angry Birds. The binder costs $37 to buy online, while AndroRAT is free and open source.
AndroRAT was first discovered in November 2012, but the binder has appeared more recently, and it is key to making it possible for people without programming skills to infect an Android phone with the malicious tool.
Once they’ve done so, they only have to upload their infected app to a third-party site and wait for others to download it. Symantec analyst Vikram Thakur estimates that roughly 50% of Android apps downloaded globally have come from third-party sites, and the practice is common in China, where the government has banned access to the official Google Play store.
Attackers will typically infect a copy of a paid-for gaming app, and advertise it as being for free to entice more downloads. “[The victim is] playing the game,” says Thakur, “and the Trojan is doing its deed in the background.”
Sometimes attackers will just want to steal contact information, which, depending on its origins, can be highly prized in underground markets. Other times they’ll want the hijacked phone to send premium SMS messages. In the latter case, victims can remain oblivious until they see the extra digits on their monthly bill — Trojaned apps can also intercept warning messages from carriers and delete them.
Thakur estimates that thousands of people across the world have downloaded apps that have been infected with AndroRAT, though he believes security services and Internet Service Providers will step up efforts to detect the intrusion.
This simplification of mobile hacking tools will come as no surprise to experts in the security industry, who have already seen wannabe crackers use automated attack tools like sqlmap or Havij to carry out relatively simple SQL injection attacks to steal customer data from websites. The notorious hacking group LulzSec revealed it had used Havij to steal passwords and email addresses from PBS in summer 2011, and it may also have been used by the hacker group Cabin Cr3w to breach a Utah police database in 2012.
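The flaw those tools automate is simple: user-supplied text pasted directly into a SQL query. A minimal, hypothetical illustration in Python, using an in-memory SQLite database and made-up table names, shows both the vulnerable pattern and the parameterized fix.

```python
# The class of bug that tools like sqlmap and Havij automate: user input
# concatenated into SQL. Illustrative only; the table and values are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: the quote in the input rewrites the query's logic,
# so the WHERE clause is true for every row regardless of the password.
query = "SELECT * FROM users WHERE name = 'alice' AND password = '%s'" % attacker_input
print(db.execute(query).fetchall())        # leaks the row

# Fixed: a parameterized query treats the input as data, not as SQL.
print(db.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    ("alice", attacker_input),
).fetchall())                               # returns nothing
```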
Darren Martyn, a former member of LulzSec who is now working in information security, says there are parallels between the way accessible tools like Havij, LOIC (an even easier tool used for taking part in DDoS attacks) and the AndroRAT binder have lowered the bar for second-rate cyber criminals without programming knowledge to subvert web applications and now, Android devices.
“It’s an emerging problem,” he said. “Even the script kiddies have it now… More irresponsible 14-year-olds with automated attack tools is a terrifying prospect, and that’s ignoring the obvious industrial espionage and ‘real crime’ potential.”
Georgia Weidman, a smartphone penetration tester who led training sessions at the Black Hat conference in Las Vegas, said it was becoming easier to exploit mobile devices thanks to tools like AndroRAT. For now, cyber criminals can still make more money from attacking traditional PCs because there are simply more machines that run Java, a programming language widely thought to have security vulnerabilities, in the browser. “That is rapidly changing though,” she said. “More malicious apps are showing up in app stores.”
Weidman herself created a tool for back-dooring Android apps, called SPF, which was designed to test app security. Similar to AndroRAT, it allowed her to decompile an app, and add new functionality such as scraping contact data, before repackaging it to look as it did before.
Such is the paradoxical world of cyber security, though, that tools like Weidman’s often end up being subverted to carry out real attacks. Weidman says she was recently approached by the government of a developing country and asked if she could create a tool similar to SPF that would allow the government to inject a popular app with software that would let it snoop on its citizens. Weidman wouldn’t name the government, but said representatives had offered her “a couple million dollars” for what would have been roughly two months’ work, and claimed they wanted to use the tool to identify sex traffickers and drug lords. She declined.

“It’s not any harder to exploit a mobile device,” Weidman said. “The easiest way to get on a traditional computer is to somehow trick a user into downloading something, or open a link in their browser. It’s the same thing in mobile.”

It doesn’t help that many consumers will blithely download whatever apps they find interesting. Some 56 billion apps are expected to be downloaded globally by the end of 2013, according to ABI Research, bringing developers $20-25 billion in revenue, and who knows how much else to cyber criminals.

Google could not be reached for comment on AndroRAT, but the company’s latest blog post highlights three steps for protecting an Android device; one of them is to let Google scan for malicious apps when the phone prompts a user to do so. Another is to set up a lock screen.

Symantec’s Thakur says the steps to keeping an Android phone secure are pretty straightforward, and users should primarily be mindful of where they download their apps from. Crucially, any downloaded app will have to ask a user for permission to access features like the contacts book or GPS data.

“Make sure the app, when installing, is only requesting permissions on the phone for what it intends to do,” he says. “If the calculator is asking to read your e-mail, there’s probably something wrong there.”
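One practical way to follow that advice before sideloading an app is to list the permissions the APK requests. A small, hypothetical script along these lines assumes the Android SDK’s aapt tool is installed and on the PATH; the APK filename is a placeholder, and the exact output format varies between aapt versions.

```python
# List the permissions an APK requests before installing it.
# Assumes the Android SDK build tool 'aapt' is installed and on the PATH;
# the APK filename is a placeholder.
import subprocess
import sys

def requested_permissions(apk_path: str) -> list[str]:
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    # aapt prints one "uses-permission: ..." line per requested permission.
    return [line.strip() for line in out.splitlines() if "uses-permission" in line]

if __name__ == "__main__":
    apk = sys.argv[1] if len(sys.argv) > 1 else "calculator.apk"
    for perm in requested_permissions(apk):
        print(perm)
```

If a simple utility turns out to request SMS, contacts or location permissions, that is exactly the mismatch Thakur is warning about.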

Letting companies strike back at computer hackers is a bad idea - A byte for a byte


SECURITY experts like to say that there are now two types of company: those which know they have been hacked and those which have been hacked without realising it. An annual study of 56 large American firms found that they suffered 102 successful cyber-attacks a week between them in 2012, a 42% rise on the year before. Rising numbers of online attacks are stoking a debate about how best to combat cyber-crooks. One emerging school of thought holds that companies should be allowed to defend themselves more aggressively by “hacking back”—using hacker-like techniques to recover stolen intellectual property and frustrate their assailants.
The discussion has been sparked by the rise of a new generation of hacker, either working for criminal groups or with close links to the state in places such as China. Advocates of hacking back argue that the usual digital defences are no match for these attackers. Instead, firms need to go on the offensive, using everything from spyware that monitors suspected hackers’ activities to software that retrieves or deletes pilfered property. If an aerospace firm spots the blueprints for its next plane flying off its database and into the computers of a foreign rival, it should be able to give chase.
The concept of hacking back has some prominent supporters, notably in America. In May a private commission on intellectual-property theft, whose members include Jon Huntsman, a former ambassador to China, and Dennis Blair, a former director of national intelligence, gave its support to technology that helps firms track stolen files and then reclaim them or prevent their use without damaging other networks. Another idea, floated more recently, is for governments to license private firms to hunt down and deal with hackers on businesses’ behalf. But encouraging digital vigilantes will only make the mayhem worse.
Hackers like to cover their tracks by routing attacks through other people’s computers, without the owners’ knowledge. That raises the alarming prospect of collateral damage to an innocent bystander’s systems: imagine the possible consequences if the unwitting host of a battle between hackers and counter-hackers were a hospital’s computer.
Endorsing the idea of hacking back would also undermine current diplomatic efforts to get China and Russia to rein in their hordes of unofficial hackers. America has been a cheerleader for an international convention on cyber-crime that prohibits private actors from striking out online. Letting American companies, or their hired guns, retaliate against hackers would undermine that effort.
Governments can still help firms battle cyber-criminals. They can spend more investigating online attacks on firms. Many are already on recruiting drives for digital sleuths. They should also share more intelligence on cyber-threats. Companies say the advice they receive is often too vague, perhaps because spooks do not want to reveal their sources. And greater clarity is needed about exactly what digital tools can be used to combat hackers. The American Bar Association says it plans to release a report on this issue in the autumn.
More intel inside
Companies should also take a long, hard look at themselves. The hackers may be getting more sophisticated, but the methods they use to get their hands on corporate secrets are often absurdly simple. A report released this year by Verizon, a telecoms firm, found that over three-quarters of network intrusions at companies were the result of weak or stolen user names and passwords. Instead of tooling up to fight the hackers, firms should focus on plugging the holes that let them in.

New cell phone case hides you from location trackers

A US technologist has designed a phone case that shields your mobile's cellular, Wi-Fi, and GPS signals, keeping your location from being tracked.

New York-based artist and technologist Adam Harvey has just launched a Kickstarter campaign to develop the signal-blocking phone case, called Off Pocket.

Harvey also made headlines in January for his line of stealth clothing designed to hide wearers from the spying eyes of drones, 'Discovery News' reported.

According to PopSci, the case is based on the technology behind the electric field-blocking Faraday cage, which protects electrical equipment from lightning strikes.

Like the cage, the Off Pocket contains a metal-fibre mesh that blocks the wireless signals (frequencies between 800 MHz and 2.4 GHz) coming from cell phone towers, Bluetooth devices and Global Positioning System (GPS) satellites as they attempt to communicate with the user's mobile.
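As a rough, back-of-the-envelope illustration (not a figure from the report): a conductive mesh shields radio waves when its openings are much smaller than the wavelength, λ = c/f, which for this band works out to tens of centimetres.

```python
# Back-of-the-envelope wavelengths for the band the case claims to block.
# A mesh with openings much smaller than these lengths acts as a Faraday shield.
C = 3.0e8  # speed of light in m/s

for label, freq_hz in [("800 MHz (cellular)", 800e6), ("2.4 GHz (Wi-Fi/Bluetooth)", 2.4e9)]:
    wavelength_cm = C / freq_hz * 100
    print(f"{label}: wavelength is roughly {wavelength_cm:.1f} cm")
# 800 MHz -> about 37.5 cm; 2.4 GHz -> about 12.5 cm
```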

The Off Pocket is waterproof and 100 times stronger than conventional signal blocking bags used in law enforcement, the report said.