Seth Martin
  
Deeplinks wrote the following post Thu, 29 Dec 2016 18:10:08 -0600

Secure Messaging Takes Some Steps Forward, Some Steps Back: 2016 In Review

This year has been full of developments in messaging platforms that employ encryption to protect users. 2016 saw an increase in the level of security for some major messaging services, bringing end-to-end encryption to over a billion people. Unfortunately, we’ve also seen major platforms making poor decisions for users and potentially undermining the strong cryptography built into their apps.

WhatsApp makes big improvements, but concerning privacy changes
In late March, the Facebook-owned messaging service WhatsApp introduced end-to-end encryption for its over 1 billion monthly active users. Adding to the enormous significance of rolling out strong encryption to such a large user base was the fact that WhatsApp's new feature was built on the Signal Protocol, a well-regarded and independently reviewed encryption protocol. WhatsApp was not only protecting users’ chats, but also doing so with one of the best end-to-end encrypted messaging protocols out there. At the time, we praised WhatsApp and created a guide for both iOS and Android on how you could protect your communications using it.

In August, however, we were alarmed to see WhatsApp establish data-sharing practices that signaled a shift in its attitude toward user privacy. In its first privacy policy change since 2012, WhatsApp laid the groundwork for expanded data-sharing with its parent company, Facebook. This change allows Facebook access to several pieces of users’ WhatsApp information, including WhatsApp phone number, contact list, and usage data (e.g. when a user last used WhatsApp, what device it was used on, and what OS it was run on). This new data-sharing compounded our previous concerns about some of WhatsApp’s non-privacy-friendly default settings.

Signal takes steps forward
Meanwhile, the well-regarded end-to-end encryption app Signal, for which the Signal Protocol was created, has grown its user base and introduced new features. Available for iOS and Android (as well as desktop if you have either of the previous two), Signal recently introduced disappearing messages to its platform. With this, users can be assured that after a chosen amount of time, messages will be deleted from both their own and their contact’s devices.

Signal also recently changed the way users verify their communications, introducing the concept of “safety numbers” to authenticate conversations and verify the long-lived keys of contacts in a more streamlined way.
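The idea behind safety numbers can be sketched in a few lines: both parties hash the same pair of identity keys, so each side independently computes (and can read aloud or scan) the same fingerprint. This is a conceptual sketch only, not Signal's actual derivation (which iterates a hash over key material and user identifiers); the key values here are made up:

```python
import hashlib

def safety_number(key_a: bytes, key_b: bytes, digits: int = 60) -> str:
    """Derive a shared fingerprint by hashing the sorted pair of
    identity keys, so both parties compute the same value regardless
    of which side they are on. Illustrative only."""
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha512(material).digest()
    number = int.from_bytes(digest, "big")
    return str(number)[:digits]

# Alice and Bob each plug in the two identity keys in their own order,
# yet both displays show the identical safety number.
alice_view = safety_number(b"alice-identity-key", b"bob-identity-key")
bob_view = safety_number(b"bob-identity-key", b"alice-identity-key")
assert alice_view == bob_view
```

If a contact's key ever changes (for example, after an attacker substitutes their own key), the derived number changes too, which is what makes out-of-band comparison meaningful.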

Mixed-mode messaging
2016 reminded us that it’s not as black-and-white as secure messaging apps vs. not-secure ones. This year we saw several existing players in the messaging space add end-to-end encrypted options to their platforms. Facebook Messenger added “secret” messaging, and Google released Allo with an “incognito” mode. These end-to-end encrypted options co-exist on the apps with a default option that is only encrypted in transit.

Unfortunately, this “mixed mode” design may do more harm than good by teaching users the wrong lessons about encryption. Branding end-to-end encryption as “secret,” “incognito,” or “private” may encourage users to use end-to-end encryption only when they are doing something shady or embarrassing. And if end-to-end encryption is a feature that you only use when you want to hide or protect something, then the simple act of using it functions as a red flag for valuable, sensitive information. Instead, encryption should be an automatic, straightforward, easy-to-use status quo to protect all communications.

Further, mixing end-to-end encrypted modes with less sensitive defaults has been demonstrated to result in users making mistakes and inadvertently sending sensitive messages without end-to-end encryption.

In contrast, the end-to-end encrypted “letter sealing” that LINE expanded this year is enabled by default. Since first introducing it for 1-on-1 chats in 2015, LINE has made end-to-end encryption the default and progressively expanded the feature to group chats and 1-on-1 calls. Users can still send messages on LINE without end-to-end encryption by changing security settings, but the company recommends leaving the default “letter sealing” enabled at all times. This kind of default design makes it easier for users to communicate with encryption from the get-go, and much more difficult for them to make dangerous mistakes.

The dangers of insecure messaging
In stark contrast to the above-mentioned secure messaging apps, a November report from Citizen Lab exposed the selective censorship that China’s WeChat messenger performs on its over 806 million monthly active users. When a user registers with a Chinese phone number, WeChat will censor content critical of the regime no matter where that user is. The censorship effectively “follows them around,” even if the user switches to an international phone number or leaves China to travel abroad. In effect, WeChat users may be under the control of China’s censorship regime no matter where they go.

Compared to the secure messaging practices EFF advocates for, WeChat represents the other end of the messaging spectrum, employing algorithms to control and limit access rather than using privacy-enhancing technologies to allow communication. This is an urgent reminder of how users can be put in danger when their communications are available to platform providers and governments, and why it is so important to continue promoting privacy-enhancing technologies and secure messaging.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2016.



#Encryption #Privacy #Communications #Messaging #Security #WhatsApp #Signal #LINE #Allo #incognito  
@Gadget Guru+ @LibertyPod+
Mike Macgirvin
  
I tend to disagree about mixed mode messaging. We need a range of communication tools, from hush-hush ultra top secret to public and open. Both ends of the spectrum have problems. That's why you need privacy.
Seth Martin
  last edited: Mon, 02 Jan 2017 10:46:52 -0600  
I agree with you, Mike. I just think it's important for these messaging apps to have encryption on by default to curb authorities targeting those that use the feature selectively.
Fabián Bonetti
 
Mike, why do I have to leave my server to reply to you?
Seth Martin
  
This bill would make several pieces of software that I use illegal. Even the software running the website you're viewing right now would be illegal.

'Leaked' Burr-Feinstein Encryption Bill Is a Threat to American Privacy



Every service, person, human rights worker, protester, reporter, company—the list goes on—will be easier to spy on.


#Privacy #Surveillance #Encryption #Freedom #Liberty @LibertyPod+ @Gadget Guru+
Mike Macgirvin
  
I don't know that it has been formally introduced, but Feinstein is pretty tenacious. She'll keep refining it and wait for some high-profile event to act as a trigger/catalyst, then ram it through in the aftermath when the opposition is on the defensive. I suspect she also leaks her own bills. That way all the furor erupts way ahead of schedule, and by the time the bill is actually introduced, nobody can mobilise opposition because by then it's old news.
Marshall Sutherland
  
Here is my answer, I guess... They are still beating the drum for this abomination.

The Lawmakers Who Control Your Digital Future Are Clueless About Technology

It is becoming increasingly clear that Senators Dianne Feinstein and Richard Burr, co-chairs of the Senate Intelligence Committee, don’t have the slightest clue about how encryption works. Good thing they’re currently pushing disastrous legislation that would force tech companies to decrypt things for law enforcement!

Today Feinstein and Burr co-authored an op-ed in the Wall Street Journal entitled “Encryption Without Tears,” and wow, it is bad. They have yet again demonstrated a failure to grasp even the most basic principles of technology.

Seth Martin
  last edited: Sun, 03 Jan 2016 10:27:02 -0600  
The Intercept wrote the following post Mon, 28 Dec 2015 08:57:30 -0600

Recently Bought a Windows Computer? Microsoft Probably Has Your Encryption Key


One of the excellent features of new Windows devices is that disk encryption is built-in and turned on by default, protecting your data in case your device is lost or stolen. But what is less well-known is that, if you are like most users and login to Windows 10 using your Microsoft account, your computer automatically uploaded a copy of your recovery key – which can be used to unlock your encrypted disk – to Microsoft’s servers, probably without your knowledge and without an option to opt-out.

During the “crypto wars” of the nineties, the National Security Agency developed an encryption backdoor technology – endorsed and promoted by the Clinton administration – called the Clipper chip, which they hoped telecom companies would use to sell backdoored crypto phones. Essentially, every phone with a Clipper chip would come with an encryption key, but the government would also get a copy of that key – this is known as key escrow – with the promise to only use it in response to a valid warrant. But due to public outcry and the availability of encryption tools like PGP, which the government didn’t control, the Clipper chip program ceased to be relevant by 1996. (Today, most phone calls still aren’t encrypted. You can use the free, open source, backdoorless Signal app to make encrypted calls.)
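The escrow arrangement Clipper proposed is easy to sketch: each device's key is duplicated into a database the authorities hold, so anything the device encrypts stays readable to whoever controls that database. A toy illustration, assuming nothing about Clipper's actual Skipjack design (the hash-based "cipher" and names here are this sketch's own, and not real cryptography):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256 counter keystream.
    Symmetric, so the same call encrypts and decrypts. Illustration only."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out += bytes(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

# Each "phone" is issued a key -- and the escrow agent keeps a copy.
device_key = secrets.token_bytes(32)
escrow_database = {"unit-001": device_key}  # the government's copy

ciphertext = keystream_xor(device_key, b"meet at noon")

# Later, the escrowed copy decrypts any intercepted traffic --
# warrant or not, the capability exists as long as the copy does.
recovered = keystream_xor(escrow_database["unit-001"], ciphertext)
assert recovered == b"meet at noon"
```

The weakness the article goes on to describe is exactly the `escrow_database` line: once a second copy of the key exists outside the user's control, the system's security depends on whoever holds it.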

The fact that new Windows devices require users to backup their recovery key on Microsoft’s servers is remarkably similar to a key escrow system, but with an important difference. Users can choose to delete recovery keys from their Microsoft accounts (you can skip to the bottom of this article to learn how) – something that people never had the option to do with the Clipper chip system. But they can only delete it after they’ve already uploaded it to the cloud.

“The gold standard in disk encryption is end-to-end encryption, where only you can unlock your disk. This is what most companies use, and it seems to work well,” says Matthew Green, professor of cryptography at Johns Hopkins University. “There are certainly cases where it’s helpful to have a backup of your key or password. In those cases you might opt in to have a company store that information. But handing your keys to a company like Microsoft fundamentally changes the security properties of a disk encryption system.”

As soon as your recovery key leaves your computer, you have no way of knowing its fate. A hacker could have already hacked your Microsoft account and can make a copy of your recovery key before you have time to delete it. Or Microsoft itself could get hacked, or could have hired a rogue employee with access to user data. Or a law enforcement or spy agency could send Microsoft a request for all data in your account, which would legally compel them to hand over your recovery key, which they could do even if the first thing you do after setting up your computer is delete it.

As Green puts it, “Your computer is now only as secure as that database of keys held by Microsoft, which means it may be vulnerable to hackers, foreign governments, and people who can extort Microsoft employees.”

Of course, keeping a backup of your recovery key in your Microsoft account is genuinely useful for probably the majority of Windows users, which is why Microsoft designed the encryption scheme, known as “device encryption,” this way. If something goes wrong and your encrypted Windows computer breaks, you’re going to need this recovery key to gain access to any of your files. Microsoft would rather give their customers crippled disk encryption than risk their data.

“When a device goes into recovery mode and the user doesn’t have access to the recovery key the data on the drive will become permanently inaccessible. Based on the possibility of this outcome and a broad survey of customer feedback we chose to automatically backup the user recovery key,” a Microsoft spokesperson told me. “The recovery key requires physical access to the user device and is not useful without it.”

After you finish setting up your Windows computer, you can login to your Microsoft account and delete the recovery key. Is this secure enough? “If Microsoft doesn’t keep backups, maybe,” says Green. “But it’s hard to guarantee that. And for people who aren’t aware of the risk, opt-out seems risky.”

This policy is in stark contrast to Microsoft’s major competitor, Apple. New Macs also ship with built-in and default disk encryption: a technology known as FileVault. Like Microsoft, Apple lets you store a backup of your recovery key in your iCloud account. But in Apple’s case, it’s an option. When you set up a Mac for the first time, you can uncheck a box if you don’t want to send your key to Apple’s servers.

This policy is also in contrast to Microsoft’s premium disk encryption product called BitLocker, which isn’t the same thing as what Microsoft refers to as device encryption. When you turn on BitLocker you’re forced to make a backup of your recovery key, but you get three options: Save it in your Microsoft account, save it to a USB stick, or print it.

To fully understand the different disk encryption features that Windows offers, you need to know some Microsoft jargon. Windows comes in different editions: Home (the cheapest), Pro, and Enterprise (more expensive). Windows Home includes device encryption, which started to become available during Windows 8, and requires your computer to have a tamper-resistant chip that stores encryption keys, something all new PCs come with. Pro and Enterprise both include device encryption, and they also include BitLocker, which started to become available during Windows Vista, but only for the premium editions. Under the hood, device encryption and BitLocker are the same thing. The difference is there’s only one way to use device encryption, but BitLocker is configurable.

If you’re using a recent version of Windows, and your computer has the encryption chip, and if you have a Microsoft account, your disk will automatically get encrypted, and your recovery key will get sent to Microsoft. If you login to Windows using your company’s or university’s Windows domain, then your recovery key will get sent to a server controlled by your company or university instead of Microsoft – but still, you can’t prevent device encryption from sending your recovery key at all. If you choose to not use a Microsoft or a domain account at all and instead create a “local only” account, then you don’t get disk encryption.
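Those rules amount to a small decision table. A sketch of the logic as the article describes it (the function name and return strings are illustrative, not Microsoft terminology):

```python
def recovery_key_destination(has_tpm: bool, account: str) -> str:
    """Summarize where a new Windows device's recovery key ends up
    under automatic "device encryption," per the article's description.
    Illustrative only."""
    if not has_tpm:
        # No tamper-resistant chip: automatic encryption never kicks in.
        return "no automatic device encryption"
    if account == "microsoft":
        return "disk encrypted; recovery key uploaded to Microsoft"
    if account == "domain":
        return "disk encrypted; recovery key sent to the domain server"
    if account == "local":
        # A "local only" account opts you out of disk encryption entirely.
        return "no disk encryption"
    raise ValueError(f"unknown account type: {account}")

print(recovery_key_destination(True, "microsoft"))
print(recovery_key_destination(True, "local"))
```

Note that no branch yields "disk encrypted, key kept only locally" -- that combination is only reachable later, via BitLocker on Pro or Enterprise, which is the article's central complaint.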

BitLocker, on the other hand, gives you more control. When you turn on BitLocker you get the choice to store your recovery key locally, among other options. But if you buy a new Windows device, even if it supports BitLocker, you’ll be using device encryption when you first set it up, and you’ll automatically send your recovery key to Microsoft.

In short, there is no way to prevent a new Windows device from uploading your recovery key the first time you login to your Microsoft account, even if you have a Pro or Enterprise edition of Windows. And this is worse than just Microsoft choosing an insecure default option. Windows Home users don’t get the choice to not upload their recovery key at all. And while Windows Pro and Enterprise users do get the choice (because they can use BitLocker), they can’t exercise that choice until after they’ve already uploaded their recovery key to Microsoft’s servers.

How to delete your recovery key from your Microsoft account
Go to this website and login to your Microsoft account – this will be the same username and password that you use to login to your Windows device. Once you’re in, it will show you a list of recovery keys backed up to your account.

If any of your Windows devices are listed, this means that Microsoft, or anyone that manages to access data in your Microsoft account, is technically able to unlock your encrypted disk, without your consent, as long as they physically have your computer. You can go ahead and delete your recovery key on this page – but you may want to back it up locally first, for example by writing it down on a piece of paper that you keep somewhere safe.

If you don’t see any recovery keys, then you either don’t have an encrypted disk, or Microsoft doesn’t have a copy of your recovery key. This might be the case if you’re using BitLocker and didn’t upload your recovery key when you first turned it on.

When you delete your recovery key from your account on this website, Microsoft promises that it gets deleted immediately, and that copies stored on their backup drives get deleted shortly thereafter as well. “The recovery key password is deleted right away from the customer’s online profile. As the drives that are used for failover and backup are sync’d up with the latest data the keys are removed,” a Microsoft spokesperson assured me.

If you have sensitive data that’s stored on your laptop, in some cases it might be safer to completely stop using your old encryption key and generate a new one that you never send to Microsoft. This way you can be entirely sure that the copy that used to be on Microsoft’s server hasn’t already been compromised.

Generate a new encryption key without giving a copy to Microsoft
In order to generate a new disk encryption key, this time without giving a copy to Microsoft, you need to decrypt your whole hard disk and then re-encrypt it, this time in such a way that you’ll actually get asked how you want to back up your recovery key.

This is only possible if you have Windows Pro or Enterprise. Unfortunately, the only thing you can do if you have the Home edition is upgrade to a more expensive edition or use non-Microsoft disk encryption software, such as BestCrypt, which you have to pay for. You may also be able to get open source encryption software like VeraCrypt working, but sadly the open source options for full disk encryption in Windows don’t currently work well with modern PC hardware (as touched on here).

Go to Start, type “bitlocker”, and click “Manage BitLocker” to open BitLocker Drive Encryption settings.


From here, click “Turn off BitLocker”. It will warn you that your disk will get decrypted and that it may take some time. Go ahead and continue. You can use your computer while it’s decrypting.


After your disk is finished decrypting, you need to turn BitLocker back on. Back in the BitLocker Drive Encryption settings, click “Turn on BitLocker”.


It will check to see if your computer supports BitLocker, and then it will ask you how you want to backup your recovery key. It sure would be nice if it asked you this when you first set up your computer.


If you choose to save it to a file, it will make you save it onto a disk that you’re not currently encrypting, such as a USB stick. Or you can choose to print it, and keep a hard copy. You must choose one of them to continue, but make sure you don’t choose “Save to your Microsoft account.”

On the next page it will ask you if you want to encrypt used disk space only (faster) or encrypt your entire disk including empty space (slower). If you want to be on the safe side, choose the latter. Then on the next page it will ask you if you wish to run the BitLocker system check, which you should probably do.

Finally, it will make you reboot your computer.

When you boot back up, your hard disk will be encrypting in the background. At this point you can check your Microsoft account again to see if Windows uploaded your recovery key – it shouldn’t have.


Now just wait for your disk to finish encrypting. Congratulations: Your disk is encrypted and Microsoft no longer has the ability to unlock it.



#Microsoft #Windows #Key Escrow #Encryption #Clipper Chip #Security @Gadget Guru+
Marshall Sutherland
  
When I upgraded some computers to Win10 for someone the other day, I have a vague recollection of being asked for a Microsoft account, but not having one, I must have cancelled out of that.

Seth Martin
  
The Intercept wrote the following post Thu, 19 Feb 2015 13:25:38 -0600
How Spies Stole the Keys to the Encryption Castle

AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden.

The hack was perpetrated by a joint unit consisting of operatives from the NSA and its British counterpart Government Communications Headquarters, or GCHQ. The breach, detailed in a secret 2010 GCHQ document, gave the surveillance agencies the potential to secretly monitor a large portion of the world’s cellular communications, including both voice and data.

The company targeted by the intelligence agencies, Gemalto, is a multinational firm incorporated in the Netherlands that makes the chips used in mobile phones and next-generation credit cards. Among its clients are AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. The company operates in 85 countries and has more than 40 manufacturing facilities. One of its three global headquarters is in Austin, Texas, and it has a large factory in Pennsylvania.

In all, Gemalto produces some 2 billion SIM cards a year. Its motto is “Security to be Free.”

With these stolen encryption keys, intelligence agencies can monitor mobile communications without seeking or receiving approval from telecom companies and foreign governments. Possessing the keys also sidesteps the need to get a warrant or a wiretap, while leaving no trace on the wireless provider’s network that the communications were intercepted. Bulk key theft additionally enables the intelligence agencies to unlock any previously encrypted communications they had already intercepted, but did not yet have the ability to decrypt.

As part of the covert operations against Gemalto, spies from GCHQ — with support from the NSA — mined the private communications of unwitting engineers and other company employees in multiple countries.

Gemalto was totally oblivious to the penetration of its systems — and the spying on its employees. “I’m disturbed, quite concerned that this has happened,” Paul Beverly, a Gemalto executive vice president, told The Intercept. “The most important thing for me is to understand exactly how this was done, so we can take every measure to ensure that it doesn’t happen again, and also to make sure that there’s no impact on the telecom operators that we have served in a very trusted manner for many years. What I want to understand is what sort of ramifications it has, or could have, on any of our customers.” He added that “the most important thing for us now is to understand the degree” of the breach.

Leading privacy advocates and security experts say that the theft of encryption keys from major wireless network providers is tantamount to a thief obtaining the master ring of a building superintendent who holds the keys to every apartment. “Once you have the keys, decrypting traffic is trivial,” says Christopher Soghoian, the principal technologist for the American Civil Liberties Union. “The news of this key theft will send a shock wave through the security community.”


Beverly said that after being contacted by The Intercept, Gemalto’s internal security team began on Wednesday to investigate how their system was penetrated and could find no trace of the hacks. When asked if the NSA or GCHQ had ever requested access to Gemalto-manufactured encryption keys, Beverly said, “I am totally unaware. To the best of my knowledge, no.”

According to one secret GCHQ slide, the British intelligence agency penetrated Gemalto’s internal networks, planting malware on several computers, giving GCHQ secret access. We “believe we have their entire network,” the slide’s author boasted about the operation against Gemalto.

Additionally, the spy agency targeted unnamed cellular companies’ core networks, giving it access to “sales staff machines for customer information and network engineers machines for network maps.” GCHQ also claimed the ability to manipulate the billing servers of cell companies to “suppress” charges in an effort to conceal the spy agency’s secret actions against an individual’s phone. Most significantly, GCHQ also penetrated “authentication servers,” allowing it to decrypt data and voice communications between a targeted individual’s phone and their telecom provider’s network. A note accompanying the slide asserted that the spy agency was “very happy with the data so far and [was] working through the vast quantity of product.”

The Mobile Handset Exploitation Team (MHET), whose existence has never before been disclosed, was formed in April 2010 to target vulnerabilities in cell phones. One of its main missions was to covertly penetrate computer networks of corporations that manufacture SIM cards, as well as those of wireless network providers. The team included operatives from both GCHQ and the NSA.

While the FBI and other U.S. agencies can obtain court orders compelling U.S.-based telecom companies to allow them to wiretap or intercept the communications of their customers, on the international front this type of data collection is much more challenging. Unless a foreign telecom or foreign government grants access to their citizens’ data to a U.S. intelligence agency, the NSA or CIA would have to hack into the network or specifically target the user’s device for a more risky “active” form of surveillance that could be detected by sophisticated targets. Moreover, foreign intelligence agencies would not allow U.S. or U.K. spy agencies access to the mobile communications of their heads of state or other government officials.

“It’s unbelievable. Unbelievable,” said Gerard Schouw, a member of the Dutch Parliament when told of the spy agencies’ actions. Schouw, the intelligence spokesperson for D66, the largest opposition party in the Netherlands, told The Intercept, “We don’t want to have the secret services from other countries doing things like this.” Schouw added that he and other lawmakers will ask the Dutch government to provide an official explanation and to clarify whether the country’s intelligence services were aware of the targeting of Gemalto, whose official headquarters is in Amsterdam.

Last November, the Dutch government amended its constitution to include explicit protection for the privacy of digital communications, including those made on mobile devices. “We have, in the Netherlands, a law on the [activities] of secret services. And hacking is not allowed,” he said. Under Dutch law, the interior minister would have to sign off on such operations by foreign governments’ intelligence agencies. “I don’t believe that he has given his permission for these kind of actions.”

The U.S. and British intelligence agencies pulled off the encryption key heist in great stealth, giving them the ability to intercept and decrypt communications without alerting the wireless network provider, the foreign government or the individual user that they have been targeted. “Gaining access to a database of keys is pretty much game over for cellular encryption,” says Matthew Green, a cryptography specialist at the Johns Hopkins Information Security Institute. The massive key theft is “bad news for phone security. Really bad news.”


AS CONSUMERS BEGAN to adopt cellular phones en masse in the mid-1990s, there were no effective privacy protections in place. Anyone could buy a cheap device from RadioShack capable of intercepting calls placed on mobile phones. The shift from analog to digital networks introduced basic encryption technology, though it was still crackable by tech savvy computer science graduate students, as well as the FBI and other law enforcement agencies, using readily available equipment.

Today, second-generation (2G) phone technology, which relies on a deeply flawed encryption system, remains the dominant platform globally, though U.S. and European cell phone companies now use 3G, 4G and LTE technology in urban areas. These include more secure, though not invincible, methods of encryption, and wireless carriers throughout the world are upgrading their networks to use these newer technologies.

It is in the context of such growing technical challenges to data collection that intelligence agencies, such as the NSA, have become interested in acquiring cellular encryption keys. “With old-fashioned [2G], there are other ways to work around cellphone security without those keys,” says Green, the Johns Hopkins cryptographer. “With newer 3G, 4G and LTE protocols, however, the algorithms aren’t as vulnerable, so getting those keys would be essential.”

The privacy of all mobile communications — voice calls, text messages and internet access — depends on an encrypted connection between the cell phone and the wireless carrier’s network, using keys stored on the SIM, a tiny chip smaller than a postage stamp which is inserted into the phone. All mobile communications on the phone depend on the SIM, which stores and guards the encryption keys created by companies like Gemalto. SIM cards can be used to store contacts, text messages, and other important data, like one’s phone number. In some countries, SIM cards are used to transfer money. As The Intercept reported last year, having the wrong SIM card can make you the target of a drone strike.

SIM cards were not invented to protect individual communications — they were designed to do something much simpler: ensure proper billing and prevent fraud, which was pervasive in the early days of cell phones. Soghoian compares the use of encryption keys on SIM cards to the way Social Security numbers are used today. “Social security numbers were designed in the 1930s to track your contributions to your government pension,” he says. “Today they are used as a quasi national identity number, which was never their intended purpose.”

Because the SIM card wasn’t created with call confidentiality in mind, the manufacturers and wireless carriers don’t make a great effort to secure their supply chain. As a result, the SIM card is an extremely vulnerable component of a mobile phone. “I doubt anyone is treating those things very carefully,” says Green. “Cell companies probably don’t treat them as essential security tokens. They probably just care that nobody is defrauding their networks.” The ACLU’s Soghoian adds, “These keys are so valuable that it makes sense for intel agencies to go after them.”

As a general rule, phone companies do not manufacture SIM cards, nor program them with secret encryption keys. It is cheaper and more efficient for them to outsource this sensitive step in the SIM card production process. They purchase them in bulk with the keys pre-loaded by other corporations. Gemalto is the largest of these SIM “personalization” companies.

After a SIM card is manufactured, the encryption key, known as a “Ki,” is burned directly onto the chip. A copy of the key is also given to the cellular provider, allowing its network to recognize an individual’s phone. In order for the phone to be able to connect to the wireless carriers’ network, the phone — with the help of the SIM — authenticates itself using the Ki that has been programmed onto the SIM. The phone conducts a secret “handshake” that validates that the Ki on the SIM matches the Ki held by the mobile company. Once that happens, the communications between the phone and the network are encrypted. Even if GCHQ or the NSA were to intercept the phone signals as they are transmitted through the air, the intercepted data would be a garbled mess. Decrypting it can be challenging and time-consuming. Stealing the keys, on the other hand, is beautifully simple, from the intelligence agencies’ point of view, as the pipeline for producing and distributing SIM cards was never designed to thwart mass surveillance efforts.
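That handshake can be sketched with HMAC standing in for the SIM's real authentication and key-derivation algorithms (A3/A8 in GSM); the function names and key sizes below are this sketch's own. The point the article makes falls out of the last two lines: anyone holding a copy of the Ki computes the same session key as the phone and carrier, and so can decrypt intercepted traffic entirely passively:

```python
import hashlib
import hmac
import secrets

def sim_response(ki: bytes, challenge: bytes) -> bytes:
    """Stand-in for the SIM's authentication algorithm (A3): prove
    knowledge of Ki without ever transmitting Ki itself."""
    return hmac.new(ki, b"auth" + challenge, hashlib.sha256).digest()[:8]

def session_key(ki: bytes, challenge: bytes) -> bytes:
    """Stand-in for key derivation (A8): the cipher key that encrypts
    the over-the-air traffic for this session."""
    return hmac.new(ki, b"kdf" + challenge, hashlib.sha256).digest()[:16]

ki = secrets.token_bytes(16)  # burned onto the SIM; a copy sits at the carrier

# The network sends a random challenge; the phone answers using the
# SIM's Ki, and the carrier checks it against its own copy.
rand = secrets.token_bytes(16)
assert sim_response(ki, rand) == sim_response(ki, rand)

# An eavesdropper who stole Ki and recorded `rand` off the air derives
# the identical session key -- no interaction with either side needed.
eavesdropper_key = session_key(ki, rand)
assert eavesdropper_key == session_key(ki, rand)
```

Because the challenge is broadcast in the clear, the stolen Ki is the only secret in the whole exchange, which is why bulk key theft defeats the encryption without ever attacking it.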

Adi Shamir, one of the inventors of the RSA encryption algorithm widely used today to secure email and web communications, famously asserted: “Cryptography is typically bypassed, not penetrated.” In other words, it is much easier (and sneakier) to open a locked door when you have the key than it is to break down the door using brute force. While the NSA and GCHQ have substantial resources dedicated to breaking encryption, that is not the only way — and certainly not always the most efficient — to get at the data they want. “NSA has more mathematicians on its payroll than any other entity in the U.S.,” says the ACLU’s Soghoian. “But the NSA’s hackers are way busier than its mathematicians.”

GCHQ and the NSA could have taken any number of routes to steal SIM encryption keys and other data. They could have physically broken into a manufacturing plant. They could have broken into a wireless carrier’s office. They could have bribed, blackmailed or coerced an employee of the manufacturer or cell phone provider. But all of that comes with substantial risk of exposure. In the case of Gemalto, hackers working for GCHQ remotely penetrated the company’s computer network in order to steal the keys in bulk as they were en route to the wireless network providers.

SIM card “personalization” companies like Gemalto ship hundreds of thousands of SIM cards at a time to mobile phone operators across the world. International shipping records obtained by The Intercept show that in 2011, Gemalto shipped 450,000 smart cards from its plant in Mexico to Germany’s Deutsche Telekom in just one shipment.

In order for the cards to work and for the phones’ communications to be secure, Gemalto also needs to provide the mobile company with a file containing the encryption keys for each of the new SIM cards. These master key files could be shipped via FedEx, DHL, UPS or another courier. More commonly, they are sent via email or File Transfer Protocol (FTP), a method of transferring files over the internet.

The moment the master key set is generated by Gemalto or another personalization company, but before it is sent to the wireless carrier, is the most vulnerable moment for interception. “The value of getting them at the point of manufacture is you can presumably get a lot of keys in one go, since SIM chips get made in big batches,” says Green, the cryptographer. “SIM cards get made for lots of different carriers in one facility.” In Gemalto’s case, GCHQ hit the jackpot, as the company manufactures SIMs for hundreds of wireless network providers, including all of the leading U.S. — and many of the largest European — companies.

But obtaining the encryption keys while Gemalto still held them required finding a way into the company’s internal systems.

Image/photo Diagram from a top-secret GCHQ slide.

TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys. They did this by utilizing the NSA’s X-KEYSCORE program, which allowed them access to private emails hosted by the SIM card and mobile companies’ servers, as well as those of major tech corporations, including Yahoo! and Google.

In effect, GCHQ clandestinely cyberstalked Gemalto employees, scouring their emails in an effort to find people who may have had access to the company’s core networks and Ki-generating systems. The intelligence agency’s goal was to find information that would aid in breaching Gemalto’s systems, making it possible to steal large quantities of encryption keys. The agency hoped to intercept the files containing the keys as they were transmitted between Gemalto and its wireless network provider customers.

GCHQ operatives identified key individuals and their positions within Gemalto and then dug into their emails. In one instance, GCHQ zeroed in on a Gemalto employee in Thailand who they observed sending PGP-encrypted files, noting that if GCHQ wanted to expand its Gemalto operations, “he would certainly be a good place to start.” They did not claim to have decrypted the employee’s communications, but noted that the use of PGP could mean the contents were potentially valuable.

The cyberstalking was not limited to Gemalto. GCHQ operatives wrote a script that allowed the agency to mine the private communications of employees of major telecommunications and SIM “personalization” companies for technical terms used in the assigning of secret keys to mobile phone customers. Employees for the SIM card manufacturers and wireless network providers were labeled as “known individuals and operators targeted” in a top-secret GCHQ document.

According to that April 2010 document, “PCS Harvesting at Scale,” hackers working for GCHQ focused on “harvesting” massive amounts of individual encryption keys “in transit between mobile network operators and SIM card personalisation centres” like Gemalto. The spies “developed a methodology for intercepting these keys as they are transferred between various network operators and SIM card providers.” By that time, GCHQ had developed “an automated technique with the aim of increasing the volume of keys that can be harvested.”

The PCS Harvesting document acknowledged that, in searching for information on encryption keys, GCHQ operatives would undoubtedly vacuum up “a large number of unrelated items” from the private communications of targeted employees. “[H]owever an analyst with good knowledge of the operators involved can perform this trawl regularly and spot the transfer of large batches of [keys].”

The document noted that many SIM card manufacturers transferred the encryption keys to wireless network providers “by email or FTP with simple encryption methods that can be broken…or occasionally with no encryption at all.” To get bulk access to encryption keys, all the NSA or GCHQ needed to do was intercept emails or file transfers as they were sent over the internet — something both agencies already do millions of times per day. A footnote in the 2010 document observed that the use of “strong encryption products…is becoming increasingly common” in transferring the keys.

In its key harvesting “trial” operations in the first quarter of 2010, GCHQ successfully intercepted keys used by wireless network providers in Iran, Afghanistan, Yemen, India, Serbia, Iceland and Tajikistan. But, the agency noted, its automated key harvesting system failed to produce results against Pakistani networks, denoted as “priority targets” in the document, despite the fact that GCHQ had a store of Kis from two providers in the country, Mobilink and Telenor. “[I]t is possible that these networks now use more secure methods to transfer Kis,” the document concluded.

From December 2009 through March 2010, a month before the Mobile Handset Exploitation Team was formed, GCHQ conducted a number of trials aimed at extracting encryption keys and other personalized data for individual phones. In one two-week period, they accessed the emails of 130 people associated with wireless network providers or SIM card manufacturing and personalization. This operation produced nearly 8,000 keys matched to specific phones in 10 countries. In another two-week period, by mining just 6 email addresses, they produced 85,000 keys. At one point in March 2010, GCHQ intercepted nearly 100,000 keys for mobile phone users in Somalia. By June, they’d compiled 300,000. “Somali providers are not on GCHQ’s list of interest,” the document noted. “[H]owever, this was usefully shared with NSA.”

The GCHQ documents only contain statistics for three months of encryption key theft in 2010. During this period, millions of keys were harvested. The documents stated explicitly that GCHQ had already created a constantly evolving automated process for bulk harvesting of keys. They describe active operations targeting Gemalto’s personalization centers across the globe, as well as other major SIM card manufacturers and the private communications of their employees.

A top-secret NSA document asserted that, as of 2009, the U.S. spy agency already had the capacity to process between 12 and 22 million keys per second for later use against surveillance targets. In the future, the agency predicted, it would be capable of processing more than 50 million per second. The document did not state how many keys were actually processed, just that the NSA had the technology to perform such swift, bulk operations. It is impossible to know how many keys have been stolen by the NSA and GCHQ to date, but, even using conservative math, the numbers are likely staggering.

GCHQ assigned “scores” to more than 150 individual email addresses based on how often the users mentioned certain technical terms, and then intensified the mining of those individuals’ accounts based on priority. The highest scoring email address was that of an employee of Chinese tech giant Huawei, which the U.S. has repeatedly accused of collaborating with Chinese intelligence. In all, GCHQ harvested the emails of employees of hardware companies that manufacture phones, such as Ericsson and Nokia; operators of mobile networks, such as MTN Irancell and Belgacom; SIM card providers, such as Bluefish and Gemalto; and employees of targeted companies who used email providers such as Yahoo! and Google. During the three-month trial, the largest number of email addresses harvested were those belonging to Huawei employees, followed by MTN Irancell. The third largest class of emails harvested in the trial were private Gmail accounts, presumably belonging to employees at targeted companies.

The GCHQ program targeting Gemalto was called DAPINO GAMMA. In 2011, GCHQ launched operation HIGHLAND FLING to mine the email accounts of Gemalto employees in France and Poland. A top-secret document on the operation stated that one of the aims was “getting into French HQ” of Gemalto “to get in to core data repositories.” France, home to one of Gemalto’s global headquarters, is the nerve center of the company’s worldwide operations. Another goal was to intercept private communications of employees in Poland that “could lead to penetration into one or more personalisation centers” — the factories where the encryption keys are burned onto SIM cards.

As part of these operations, GCHQ operatives acquired the usernames and passwords for Facebook accounts of Gemalto targets. An internal top-secret GCHQ wiki on the program from May 2011 indicated that GCHQ was in the process of “targeting” more than a dozen Gemalto facilities across the globe, including in Germany, Mexico, Brazil, Canada, China, India, Italy, Russia, Sweden, Spain, Japan and Singapore.

The document also stated that GCHQ was preparing similar key theft operations against one of Gemalto’s competitors, Germany-based SIM card giant Giesecke and Devrient.

On January 17, 2014, President Barack Obama gave a major address on the NSA spying scandal. “The bottom line is that people around the world, regardless of their nationality, should know that the United States is not spying on ordinary people who don’t threaten our national security and that we take their privacy concerns into account in our policies and procedures,” he said.

The monitoring of the lawful communications of employees of major international corporations shows that such statements by Obama, other U.S. officials and British leaders — that they only intercept and monitor the communications of known or suspected criminals or terrorists — were untrue. “The NSA and GCHQ view the private communications of people who work for these companies as fair game,” says the ACLU’s Soghoian. “These people were specifically hunted and targeted by intelligence agencies, not because they did anything wrong, but because they could be used as a means to an end.”

THERE ARE TWO basic types of electronic or digital surveillance: passive and active. All intelligence agencies engage in extensive passive surveillance, which means they collect bulk data by intercepting communications sent over fiber optic cables, radio waves or wireless devices.

Intelligence agencies place high-powered antennas, known as “spy nests,” on the top of their countries’ embassies and consulates, which are capable of vacuuming up data sent to or from mobile phones in the surrounding area. The joint NSA/CIA Special Collection Service is the lead entity that installs and mans these nests for the United States. An embassy situated near a parliament or government agency could easily intercept the phone calls and data transfers of the mobile phones used by foreign government officials. The U.S. embassy in Berlin, for instance, is located a stone’s throw from the Bundestag. But if the wireless carriers are using stronger encryption, which is built into modern 3G, 4G and LTE networks, then intercepted calls and other data would be more difficult to crack, particularly in bulk. If an intelligence agency wants to actually listen to or read what is being transmitted, it needs to decrypt the intercepted data.

Active surveillance is another option. This would require government agencies to “jam” a 3G or 4G network, forcing nearby phones onto 2G. Once forced down to the less secure 2G technology, the phone can be tricked into connecting to a fake cell tower operated by an intelligence agency. This method of surveillance, though effective, is risky, as it leaves a digital trace that counter-surveillance experts from foreign governments could detect.

Stealing the Kis solves all of these problems. This way, intelligence agencies can safely engage in passive, bulk surveillance without having to decrypt data and without leaving any trace whatsoever.

“Key theft enables the bulk, low-risk surveillance of encrypted communications,” the ACLU’s Soghoian says. “Agencies can collect all the communications and then look through them later. With the keys, they can decrypt whatever they want, whenever they want. It’s like a time machine, enabling the surveillance of communications that occurred before someone was even a target.”

Neither the NSA nor GCHQ would comment specifically on the key theft operations. In the past, they have argued more broadly that breaking encryption is a necessary part of tracking terrorists and other criminals. “It is longstanding policy that we do not comment on intelligence matters,” a GCHQ official stated in an email, adding that the agency’s work is conducted within a “strict legal and policy framework” that ensures its activities are “authorized, necessary and proportionate,” with proper oversight, which is the standard response the agency has provided for previous stories published by The Intercept. The agency also said, “[T]he UK’s interception regime is entirely compatible with the European Convention on Human Rights.” The NSA declined to offer any comment.

It is unlikely that GCHQ’s pronouncement about the legality of its operations will be universally embraced in Europe. “It is governments massively engaging in illegal activities,” says Sophie in’t Veld, a Dutch member of the European Parliament. “If you are not a government and you are a student doing this, you will end up in jail for 30 years.” Veld, who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept: “The secret services are just behaving like cowboys. Governments are behaving like cowboys and nobody is holding them to account.”

The Intercept’s Laura Poitras has previously reported that in 2013 Australia’s signals intelligence agency, a close partner of the NSA, stole some 1.8 million encryption keys from an Indonesian wireless carrier.

A few years ago, the FBI reportedly dismantled several transmitters set up by foreign intelligence agencies around the Washington DC area, which could be used to intercept cell phone communications. Russia, China, Israel and other nations deploy technology similar to the NSA’s across the world. If those governments had the encryption keys for major U.S. cell phone companies’ customers, such as those manufactured by Gemalto, mass snooping would be simple. “It would mean that with a few antennas placed around Washington DC, the Chinese or Russian governments could sweep up and decrypt the communications of members of Congress, U.S. agency heads, reporters, lobbyists and everyone else involved in the policymaking process and decrypt their telephone conversations,” says Soghoian.

“Put a device in front of the UN, record every bit you see going over the air. Steal some keys, you have all those conversations,” says Green, the Johns Hopkins cryptographer. And it’s not just spy agencies that would benefit from stealing encryption keys. “I can only imagine how much money you could make if you had access to the calls made around Wall Street,” he adds.

Image/photo GCHQ slide.

THE BREACH OF Gemalto’s computer network by GCHQ has far-reaching global implications. The company, which brought in $2.7 billion in revenue in 2013, is a global leader in digital security, producing banking cards, mobile payment systems, two-factor authentication devices used for online security, hardware tokens used for securing buildings and offices, electronic passports and identification cards. It provides chips to Vodafone in Europe and France’s Orange, as well as EE, a joint venture in the U.K. between France Telecom and Deutsche Telekom. Royal KPN, the largest Dutch wireless network provider, also uses Gemalto technology.

In Asia, Gemalto’s chips are used by China Unicom, Japan’s NTT and Taiwan’s Chunghwa Telecom, as well as scores of wireless network providers throughout Africa and the Middle East. The company’s security technology is used by more than 3,000 financial institutions and 80 government organizations. Among its clients are Visa, Mastercard, American Express, JP Morgan Chase and Barclays. It also provides chips for use in luxury cars, including those made by Audi and BMW.

In 2012, Gemalto won a sizable contract, worth $175 million, from the U.S. government to produce the covers for electronic U.S. passports, which contain chips and antennas that can be used to better authenticate travelers. As part of its contract, Gemalto provides the personalization and software for the microchips implanted in the passports. The U.S. represents Gemalto’s single largest market, accounting for some 15 percent of its total business. This raises the question of whether GCHQ, which was able to bypass encryption on mobile networks, has the ability to access private data protected by other Gemalto products created for banks and governments.

As smart phones become smarter, they are increasingly replacing credit cards and cash as a means of paying for goods and services. When Verizon, AT&T and T-Mobile formed an alliance in 2010 to jointly build an electronic pay system to challenge Google Wallet and Apple Pay, they purchased Gemalto’s technology for their program, known as Softcard. (Until July 2014, it went by the unfortunate name of “ISIS Mobile Wallet.”) Whether data relating to that, and other Gemalto security products, has been compromised by GCHQ and the NSA is unclear. Both intelligence agencies declined to answer any specific questions for this story.

Image/photo Signal, iMessage, WhatsApp, Silent Phone.

PRIVACY ADVOCATES and security experts say it would take billions of dollars, significant political pressure, and several years to fix the fundamental security flaws in the current mobile phone system that NSA, GCHQ and other intelligence agencies regularly exploit.

A current gaping hole in the protection of mobile communications is that cell phones and wireless network providers do not support the use of Perfect Forward Secrecy (PFS), a form of encryption designed to limit the damage caused by theft or disclosure of encryption keys. PFS, which is now built into modern web browsers and used by sites like Google and Twitter, works by generating unique encryption keys for each communication or message, which are then discarded. Rather than using the same encryption key to protect years’ worth of data, as the permanent Kis on SIM cards can, a new key might be generated each minute, hour or day, and then promptly destroyed. Because cell phone communications do not utilize PFS, if an intelligence agency has been “passively” intercepting someone’s communications for a year and later acquires the permanent encryption key, it can go back and decrypt all of those communications. If mobile phone networks were using PFS, that would not be possible — even if the permanent keys were later stolen.
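
The difference is easy to see in a toy ephemeral Diffie-Hellman exchange, the building block behind PFS. This sketch uses deliberately simple parameters (a Mersenne prime modulus) that are nowhere near secure for real use; it only illustrates that each session derives a fresh key from secrets that are immediately discarded:

```python
import hashlib
import secrets

# Toy group parameters for illustration only; real deployments use
# vetted groups or elliptic curves, and add authentication on top.
P = 2**521 - 1   # a Mersenne prime, standing in for a proper DH modulus
G = 5

def dh_session() -> str:
    """One session: both sides draw fresh ephemeral secrets, derive a
    shared key, then forget the secrets (the essence of PFS)."""
    a = secrets.randbelow(P - 2) + 2        # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 2        # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)       # public values sent on the wire
    k_alice = pow(B, a, P)                  # Alice computes g^(ab)
    k_bob = pow(A, b, P)                    # Bob computes the same value
    assert k_alice == k_bob
    # Hash the shared secret down to a usable symmetric key.
    return hashlib.sha256(k_alice.to_bytes(66, "big")).hexdigest()

# Each session yields an unrelated key. An eavesdropper who records the
# public values A and B, then later steals a long-term key, still learns
# nothing about past session keys: the ephemeral secrets no longer exist.
k1, k2 = dh_session(), dh_session()
assert k1 != k2
```

Contrast this with the SIM model, where one permanent Ki protects every session a phone ever has.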

The only effective way for individuals to protect themselves from Ki theft-enabled surveillance is to use secure communications software, rather than relying on SIM card-based security. Secure software includes email and other apps that use Transport Layer Security (TLS), the mechanism underlying the secure HTTPS web protocol. The email clients included with Android phones and iPhones support TLS, as do large email providers like Yahoo! and Google.
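
As a quick check of whether a given mail server offers TLS at all, Python's standard library is enough. A small sketch (the hostname is a placeholder, and calling the function requires network access):

```python
import socket
import ssl

def probe_tls(host: str, port: int = 993) -> tuple:
    """Open a TLS connection to a mail server (IMAPS by default) and
    report the negotiated protocol version and cipher suite."""
    ctx = ssl.create_default_context()   # verifies certificate and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

# Example (placeholder host, requires network access):
#   print(probe_tls("imap.example.com"))
```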

Apps like TextSecure and Silent Text are secure alternatives to SMS messages, while Signal, RedPhone and Silent Phone encrypt voice communications. Governments still may be able to intercept communications, but reading or listening to them would require hacking a specific handset, obtaining internal data from an email provider, or installing a bug in a room to record the conversations.

“We need to stop assuming that the phone companies will provide us with a secure method of making calls or exchanging text messages,” says Soghoian.

———


Additional reporting by Andrew Fishman and Ryan Gallagher. Sheelagh McNeill, Morgan Marquis-Boire, Alleen Brown, Margot Williams, Ryan Devereaux and Andrea Jones contributed to this story.

The post How Spies Stole the Keys to the Encryption Castle appeared first on The Intercept.


#Snowden #Encryption #Privacy #Spies #Spying #NSA #GCHQ #Snooping #Surveillance #Communications #Security @LibertyPod+ @Gadget Guru+

Seth Martin
  last edited: Mon, 19 May 2014 23:47:27 -0500  
Mandatory Encryption on XMPP Starts Today

Last year Peter Saint-Andre laid out a plan for strengthening the security of the XMPP network. The manifesto, to date signed by over 70 XMPP service operators and software developers, offered a rallying point for those interested in ensuring the security of XMPP for its users.

Today is the date that the manifesto gave for the final 'flip of the switch': as of today many XMPP services will begin refusing unencrypted connections. If you run an XMPP service, we encourage you to do the same. On the xmpp.org wiki you can find instructions for all the popular XMPP server software. While XMPP is an open distributed network, obviously no single entity can "mandate" encryption for the whole network - but as a group we are moving in the right direction.

If you use an XMPP service provided by someone else and you encounter problems contacting family, friends or colleagues starting from today, it may be a sign that either your XMPP service or theirs is not properly supporting encryption. Contact the administrator of your service and let them know about this change. You can also use xmpp.net to test any server.

We still have some way to go. For example, today's change only ensures encryption (enough to beat passive capture of traffic); it does not require servers to have a valid certificate issued by a certificate authority (though some services do already choose to require this).
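
The distinction above, encryption without authentication versus a CA-verified certificate, maps directly onto TLS configuration. A small Python sketch of the two client-side policies (illustrative only; server operators configure the equivalent in their XMPP server software):

```python
import ssl

# Policy 1: encrypt but accept any certificate. This defeats passive
# capture of traffic, but an active man-in-the-middle could present a
# forged certificate without being detected.
enc_only = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
enc_only.check_hostname = False          # must be disabled before CERT_NONE
enc_only.verify_mode = ssl.CERT_NONE

# Policy 2: encrypt AND require a valid CA-issued certificate that
# matches the peer's hostname (the default for client contexts).
verified = ssl.create_default_context()

assert enc_only.verify_mode == ssl.CERT_NONE
assert verified.verify_mode == ssl.CERT_REQUIRED and verified.check_hostname
```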

There is a whole lot of work being done to pave the way for a future without CAs, as they are a sticking point for many people - whether for financial, trust, privacy or philosophical reasons. Some current initiatives include DNSSEC and Monkeysphere, and some folks prefer to trust nothing less than hand-verified fingerprints! We already have experimental plugins available in prosody-modules for these things (mod_s2s_auth_dane, mod_s2s_auth_monkeysphere, mod_s2s_auth_fingerprint, etc.). If this is something you are interested in, take a look, help us test, and perhaps even contribute code!

#XMPP #Encryption

Seth Martin
  last edited: Mon, 31 Mar 2014 13:15:51 -0500  
The U.S. National Security Agency managed to have security firm RSA adopt not just one, but two security tools, further facilitating NSA eavesdropping on Internet communications. The newly discovered software is dubbed 'Extended Random', and was intended to facilitate exploitation of the back door in the already known 'Dual Elliptic Curve' random number generator. Researchers from several U.S. universities discovered Extended Random and assert it could help crack Dual Elliptic Curve-encrypted communications 'tens of thousands of times faster'.

Exclusive: NSA infiltrated RSA security more deeply than thought - study


SAN FRANCISCO (Reuters) - Security industry pioneer RSA adopted not just one but two encryption tools developed by the U.S. National Security Agency, greatly increasing the spy agency's ability to eavesdrop on some Internet communications, according to a team of academic researchers.


#NSA #Spying #Snooping #Liberty #Infiltration #Privacy #RSA #Encryption #Security #Surveillance #Computer Security @LibertyPod
Mike Macgirvin
  
The general consensus was that dual elliptic curve was dead the second the first revelations came out.

Seth Martin
  
Exclusive: Secret contract tied NSA and security industry pioneer


SAN FRANCISCO (Reuters) - As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry, Reuters has learned.


#RSA #Bsafe #Backdoor #Encryption #NSA #Spying #Snooping #Snowden #Computer Security #Privacy #Freedom #Liberty #Security @LibertyPod

Seth Martin
  
The TQP Development v. Newegg trial has ended, and the case is now in the hands of an eight-person jury in Marshall, TX.

TQP claims that a patent granted to a man named Michael Jones covers a vast swath of the Internet: any website, in fact, using the common encryption combination of SSL with the RC4 cipher. The patent-holding company, owned by Erich Spangenberg and derided by its critics as a "patent troll," has collected $45 million from 139 companies. Online retailer Newegg won't pay and has pushed the case to a jury.
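
For reference, the RC4 cipher at the center of the patent claim is a famously short algorithm. A minimal Python version (for illustration only; RC4 has since been shown to have serious weaknesses and should not be used in new systems):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt data with RC4 (the same operation both ways)."""
    # Key-scheduling algorithm (KSA): permute the state under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): XOR the keystream with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Stream cipher: applying the same key twice round-trips the data.
ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"
```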

Now, the jury has heard opposing lawyers from the two teams wrap up their cases. The case is in the jury's hands. The jury has already stayed over the weekend, so it wouldn't be surprising for a verdict to come this evening.

Newegg trial: Crypto legend takes the stand, goes for knockout patent punch


Taking a bet on Whit Diffie, as the trial against "patent troll" TQP wraps up Monday.

MARSHALL, TX—Newegg's courtroom face-off with patent-licensing giant TQP Development is nearing its end. TQP has sued hundreds of companies, saying it has patented the common Web encryption scheme of combining SSL with the RC4 cipher. Almost 140 companies have paid TQP a total of more than $45 million. But online retailer Newegg, which has sworn not to settle with "patent trolls" like TQP, took the case to a jury.


#Cryptography #Whit Diffie #SSL #Encryption #RC4 #Diffie #TQP #Patent Troll #Newegg #Court #Trial #E-Commerce #Patents @LibertyPod
Mike Macgirvin
  
And I have to add that (PK encryption) to the list of things that came out of my old Stanford lab - though it was a few years before my tenure there. Just little stuff - like IMAP mail, and the mouse. And Cisco. Oh, and Sun Microsystems. Oh, and the freaking internet itself. Yahoo came out of the business school - which is probably appropriate (as an example of how to be a shooting star). And Google came out of the CS department - which is also probably appropriate.

We stand on the shoulders of giants.

Had a tour this morning of Wollongong's Engineering department and it just wasn't the same. You couldn't say things like "here's Sally Ride's office" or "here's where the klystron was invented". Or "the first adult heart transplant happened here" or "the first atom smasher was built here" (which was later my office, radiation included). It just wasn't quite the same. They're nice labs - don't get me wrong. But there's no "changing the world" kind of history that you could attach to it.
Arto
  
I suspect such history is not being written at Stanford these days either. The political climate is different: gone is the Cold War, when the U.S. had to stay ahead of the Soviets in every scientific measure, hence the massive investment in basic science.

Now, the great university labs are little more than corporate extensions, optimized for a quick IPO. Get rich (off someone else's work) and run!

When Steve Jobs died, it was news for days, the President spoke about it.  When Dennis Ritchie died the mainstream news was mute.

Just talked to my best friend: he does computer-assisted music improvisation and anything related to cutting-edge audio processing and algorithmic composition, robots, etc. He said the high point for funding research in computer-science applications of audio was between 1945 and 1980. It gave us synthesizers and the modern recording industry, not to mention the scientific results in psycho-acoustics, etc. Now, funding has been slashed for the 20th consecutive year in places like UC San Diego and Stanford's CCRMA labs.
Seth Martin
  
Well, unfortunately the jury has ruled in favor of TQP Development in this case, ordering Newegg to pay $2.3 million, a bit less than half of the $5.1 million TQP's damages expert suggested.
Newegg has budgeted for an appeal from the start and still plans to take this case up.

Seth Martin
  
We've seen it argued that privacy is a bad thing. People like former DHS official Stewart Baker have argued that the privacy-protecting efforts of civil liberties activists are the reason we're forced to be fondled and de-shod at TSA checkpoints. Not only that, he's tried to blame the 9/11 attacks on the "rise of civil libertarianism." Unbelievably, we've also had a politician recently claim that your privacy isn't violated if you don't notice the violation.

We've also seen attacks on anonymity by (anonymous) police officers and a whole slew of pundits and politicians who believe the only thing online anonymity does is provide a shield for trolls, bullies and pirates to hide behind. Efforts have been made to outlaw online anonymity, but fortunately, very few laws have been passed.

Now, try wrapping your mind around this argument being made by Art Coviello, executive chairman of RSA Security and the head of EMC's security division. According to him, anonymity and privacy are at odds with each other.

A dogmatic allegiance to anonymity is threatening privacy, according to Art Coviello, executive chairman of RSA.

Coviello cast anonymity as the "enemy of privacy" because it gives adversaries "free reign" on our networks with "no risk of discovery or prosecution."


On one hand, anonymity is slowing down the pursuit of online criminals. On the other hand, companies are increasingly wary of subjecting their employees to intrusive security software.

Customers are caught in a Catch-22. They're afraid to deploy technology for fear of violating workers' privacy, even though security intelligence tools are ultimately the best way to protect personal information, Coviello argued.


How Coviello arrives at the conclusion that anonymity is damaging privacy isn't exactly clear. It may be the enemy of security (or at least, unhelpful to retributive actions), but the online anonymity shielding crooks doesn't threaten users' privacy, at least not directly. Indirectly it could, but it wouldn't be anonymity's "fault." If Coviello wants attackers to be stripped of anonymity, there's little doubt he'd like to see clients' employees stripped of their privacy. Both would make his companies' jobs easier. Attackers would be easily identified and clients would receive (arguably) better protection (thanks to more, non-anonymized data gathering). Win-win for security. Not so much for those who cherish privacy and anonymity.

This isn't exactly new ground for Coviello. He did some complaining about privacy at last year's RSA conference as well.

RSA executive chairman Art Coviello has criticised privacy advocates for basing their arguments on “dangerous reasoning”, comments that have already earned him a tongue lashing from Big Brother Watch and the Open Rights Group.

Coviello, whilst noting the need for privacy, lambasted privacy groups’ “knee jerk” reactions to public and private sector attempts to improve people’s security, pointing to the “insanity” of the situation, in a keynote to open the RSA 2012 conference in London this morning.
In Coviello’s view, privacy advocates are over-reacting to measures designed to protect online identities, preferring to live in a world of danger: “Because privacy advocates don’t realise that safeguards can be implemented, they think we must accept reasonable danger to protect our freedoms,” Coviello said.

“But this is based on dangerous reasoning, a knee jerk reaction, without understanding the severity and scope of the problem.

“Where is it written that cyber criminals can steal our identities but any industry action to protect us invites cries of Big Brother.”


Not for nothing has someone noted that RSA is only a letter away from the United States' most notorious intelligence agency.

Coviello's arguments here aren't much different from the government's opinions on the "liberty vs. security" balance. And like other defenders of intrusive programs, Coviello dismisses the statements of critics as an "over-reaction." But is it one? He bristles at being compared to Big Brother, but his thought processes roughly align with those of the government's foremost proponents of intrusive programs. According to both, people just don't understand how bad things actually are, and in our unenlightened state, we're making the wrong choice between security and liberty.

Additionally, the "knee jerk reaction" he sees in privacy activists is, in reality, no different from the knee jerk reactions he fails to see in security and intelligence entities. While privacy activists focus on retaining what remains and making small pushes for more, security/intelligence agencies leverage every tragedy or attack to expand their scope and dial back privacy protections.

But where his argument against privacy (and anonymity) ultimately falls apart is in his belief that collecting and storing large amounts of private data is the best solution for all involved.

To “suggest the only way to protect against cyber crime is to sacrifice privacy and civil liberties is absurd,” Nick Pickles, director of privacy campaign group Big Brother Watch, told TechWeekEurope. “It is a simple fact that if data has not been collected, it cannot be stolen, lost or misused. The best safeguard for consumers and businesses is for data not to be collected unless it is absolutely essential, and then deleted as soon as it is no longer required.”


As for his complaints about anonymity? It's pretty much all or nothing. You can't whip up statutes and laws that allow anonymity and its privacy protections for everyone except criminals. Either you take the good with the bad or you eliminate it for everybody. No one's going to agree with that last option, so security groups and companies will just have to deal with the fact that their adversaries will be cloaking their identities. Cops may wish robbers wouldn't wear masks when committing crimes, but that's the way it goes. You can't ban the sale of masks simply because someone holds up a bank wearing one.

I'm sure he understands this, but he's in a field where security is valued over privacy, and that's the expected mindset for someone in his position. The problem is that those with his mindset expect others to come to the same conclusion -- and when they don't, they're portrayed as part of the problem.

To be fair, Coviello at least had this to say about the jargon being deployed by government security officials and advisors.

"I absolutely hate the term 'Cyber Pearl Harbor'," he said. "I just think it's a poor metaphor to describe the state we are really in. What do I do differently once I've heard it? And I've been hearing it for 10 years now. To trigger a physically destructive event solely from the internet might not be impossible, but it is still, as of today, highly unlikely."


Coviello may not like this particular FUD, but claiming anonymity and privacy are standing in the way of security isn't that far removed from the panicky assertions of the "cyber Pearl Harbor" types.

Source

#Anonymity #Privacy #Freedom #Liberty #RSA #Encryption #Security #Intelligence @LibertyPod
Sean Tilley
  
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH
Seth Martin
  
Oliver wrote the following post Thu, 31 Oct 2013 05:38:41 -0500

Dark Mail Alliance


Privacy Innovators
Silent Circle and Lavabit are developing a new way to do email with end-to-end encryption. We welcome like-minded organizations to join our alliance.

Our Mission
To bring the world our unique end-to-end encrypted protocol and architecture that is the 'next-generation' of private and secure email. As founding partners of The Dark Mail Alliance, both Silent Circle and Lavabit will work to bring other members into the alliance, assist them in implementing the new protocol, and jointly work to proliferate the world's first end-to-end encrypted 'Email 3.0' throughout the world's email providers. Our goal is to open source the protocol and architecture and help others implement this new technology to address privacy concerns about surveillance and back door threats of any kind.


(Piwik is running in the background; no opt-out?)

#Privacy #Lavabit #SilentCircle #DarkMailAlliance #Liberty #Freedom #E-Mail #Communication #Encryption @LibertyPod
Mike Macgirvin
  
It's only ISPs who draw a distinction between clients and servers. But realistically if you want to provide web resources, you probably want them served from the "wild west" side of your home router and not the side that's locked down. You also probably want to keep this machine turned on so your photo albums are available to friends in Australia that are awake when you're sleeping (if you have any of these). SIP and XMPP are solving different problems. We want to share web resources. That's what we're providing. Photos, files, events, *and* conversations. If it was just text comments about what we had for dinner with one or two people that are always online when we are, one might make different protocol decisions.
Seth Martin
  last edited: Tue, 05 Nov 2013 13:52:08 -0600  
Dark Mail Initiative is on Kickstarter now.

http://www.kickstarter.com/projects/ladar/lavabits-dark-mail-initiative

The goal is to clean up and release the source code that was used to power Lavabit as a F/OSS project, with Dark Mail support added.
Oliver
  last edited: Wed, 06 Nov 2013 02:03:45 -0600  
Moxie Marlinspike >> Blog >> A Critique Of Lavabit

In August of this year, Ladar Levison shut down his email service, Lavabit, in an attempt to avoid complying with a US government request for his users’ emails. To defy the US government’s gag order and shut down his service took great courage, and I believe that Ladar deserves our support in his legal defense of that decision. There is now an ...