
Seth Martin

seth@lastauth.com

Seth Martin
  last edited: Fri, 21 Apr 2017 18:02:48 -0500  
Once more, with passion: Fingerprints suck as passwords

Biometric data is identity (public), never authentication (secret). You leave a copy of your fingerprints literally on everything you touch.


#Privacy #Security #Passwords #Cybersecurity #Biometrics @Gadget Gurus+ @LibertyPod+
  
So while it's easy to update your password or get a new credit card number, you can't get a new finger.
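The difference is structural, not just practical. A password can be stored as a salted hash and compared exactly, and you can issue a new one after a breach. A fingerprint reading is never bit-identical between scans, so matching has to be a similarity threshold against a stored template — and the "secret" can never be rotated once it leaks. A toy sketch (all values illustrative):

```python
import hashlib, secrets

# A password is an exact-match, rotatable secret: store only a salted hash,
# and issue a brand-new one after any breach.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
stored = hash_password("correct horse battery staple", salt)
assert hash_password("correct horse battery staple", salt) == stored

# A fingerprint scan differs slightly every time, so matching is a similarity
# threshold over a stored template -- the template must stay recoverable, and
# unlike a password it can never be rotated after it leaks.
def biometric_match(template: list, reading: list, max_diff: int = 2) -> bool:
    differences = sum(1 for a, b in zip(template, reading) if a != b)
    return differences <= max_diff

enrolled = [4, 8, 15, 16, 23, 42]      # toy minutiae features
todays_scan = [4, 8, 15, 16, 22, 42]   # slightly different on every scan
assert biometric_match(enrolled, todays_scan)
```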

https://www.schneier.com/blog/archives/2015/10/stealing_finger.html

And ten years ago the CCC showed how easily a fingerprint can be faked with superglue and wood glue:
https://www.youtube.com/watch?v=OPtzRQNHzl0 (sorry, the video is in German).
  
But (!) fingerprints do work well for letting security agencies track you around.

I believe that is the reason for the push for biometrics, and fingerprint scanners in particular.

I'm skeptical of most security features originating from Facebook, Apple, Google, or Microsoft.

Seth Martin
  
Techdirt wrote the following post Tue, 04 Apr 2017 08:23:00 -0500

AT&T, Comcast & Verizon Pretend They Didn't Just Pay Congress To Sell You Out On Privacy

Large ISPs like AT&T, Verizon and Comcast spent a significant part of Friday trying to convince the press and public that they didn't just screw consumers over on privacy (if you've been napping: they did). With the vote on killing FCC broadband privacy protections barely in the books, ISP lobbyists and lawyers penned a number of editorials and blog posts breathlessly professing their tireless dedication to privacy, and insisting that worries about the rules' repeal are little more than "misinformation."

All of these posts, in lock step, tried to effectively make three key arguments: that the FTC will rush in to protect consumers in the wake of the FCC rules being repealed (not happening), ISPs don't really collect much data on you anyway (patently untrue), and that ISPs' lengthy, existing privacy policies and history of consumer respect mean consumers have nothing to worry about (feel free to pause here and laugh).

For more than a decade, large ISPs have used deep-packet inspection, search engine redirection and clickstream data collection to build detailed user profiles, and their longstanding refusal to candidly talk about many of these programs should make their actual dedication to user privacy abundantly clear. Yet over at Comcast, Deputy General Counsel & Chief Privacy Officer Gerard Lewis spent some time complaining that consumer privacy concerns are little more than "misleading talk" and "misinformation and inaccurate statements":

"There has been a lot of misleading talk about how the congressional action this week to overturn the regulatory overreach of the prior FCC will now permit us to sell sensitive customer data without customers’ knowledge or consent. This is just not true. In fact, we have committed not to share our customers’ sensitive information (such as banking, children’s, and health information), unless we first obtain their affirmative, opt-in consent."

So one, the "commitment" Comcast links to in this paragraph is little more than a cross-industry, toothless and voluntary self-regulatory regime that means just a fraction more than nothing at all. And while Comcast insists it doesn't sell its broadband customers' "individual web browsing history" (yet), they do still collect an ocean of other data for use in targeted ads, and there's really little stopping them from using your browsing history in this same way down the road -- it may not be "selling" your data, but it is using it to let advertisers target you. Comcast proceeds to say it's updating its privacy policy in the wake of the changes -- as if such an action (since these policies are drafted entirely to protect the ISP, not the consumer) means anything at all.

Like Comcast, Verizon's blog post on the subject amusingly acts as if the company's privacy policy actually protects you, not Verizon:

"Verizon is fully committed to the privacy of our customers. We value the trust our customers have in us so protecting the privacy of customer information is a core priority for us. Verizon’s privacy policy clearly lays out what we do and don’t do as well as the choices customers can make."

Feel better? That's the same company, we'll note, that was caught covertly modifying user data packets to track users around the internet regardless of any other data collected. That program was in place for two years before security researchers even noticed it existed. It took another six months of public shaming before the company even provided the option for consumers to opt out. Verizon's own recent history makes it clear its respect for consumer privacy is skin deep. And again, there's nothing really stopping Verizon from expanding this data collection and sales down the road, and burying it on page 117 of its privacy policy.
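The covert tracking Techdirt is referring to worked by Verizon rewriting its customers' plaintext HTTP traffic in transit, appending a persistent per-subscriber identifier header (widely reported as "X-UIDH"). A simplified simulation of the mechanism — the function and header names here follow the public reporting, but this is an illustration, not Verizon's actual system:

```python
# Toy simulation of carrier-side header injection (the mechanism behind the
# reported "X-UIDH" supercookie). Names and values are illustrative.

def carrier_proxy(request_headers: dict, subscriber_id: str) -> dict:
    """Rewrite an outbound plaintext HTTP request, appending a persistent
    per-subscriber identifier the user never set and cannot clear."""
    tagged = dict(request_headers)
    tagged["X-UIDH"] = subscriber_id  # injected by the network, not the browser
    return tagged

# The user clears cookies between requests -- it makes no difference, because
# the identifier is added after the traffic leaves their device.
first = carrier_proxy({"Host": "news.example", "Cookie": "session=abc"}, "sub-1234")
second = carrier_proxy({"Host": "shop.example"}, "sub-1234")  # cookies cleared

# Any ad network seeing both requests can join them into one profile.
assert first["X-UIDH"] == second["X-UIDH"]
```

Note that this only works on unencrypted HTTP — the carrier cannot inject headers into TLS-protected traffic, which is one more argument for HTTPS everywhere.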

AT&T was a bit more verbose in a post over at the AT&T policy blog, where again it trots out this idea that existing FTC oversight is somehow good enough:

"The reality is that the FCC’s new broadband privacy rules had not yet even taken effect. And no one is saying there shouldn’t be any rules. Supporters of this action all agree that the rescinded FCC rules should be replaced by a return to the long-standing Federal Trade Commission approach. But in today’s overheated political dialogue, it is not surprising that some folks are ignoring the facts."

So again, the FTC doesn't really have much authority over broadband, and AT&T forgets to mention that its lawyers have found ways to wiggle around what little authority the agency does have via common carrier exemptions. And while AT&T insists that "no one is saying there shouldn't be any rules," its lobbyists are working tirelessly to accomplish precisely that by gutting both FTC and FCC oversight of the telecom sector. Not partially. Entirely. Title II, net neutrality, privacy -- AT&T wants it all gone. Its pretense to the contrary is laughable.

Like the other two providers, AT&T trots out this idea that the FCC's rules weren't fair because they didn't also apply to "edge" companies like Facebook or Google (which actually are more fully regulated by the FTC). That's a flimsy point also pushed by an AT&T and US Telecom Op/Ed over at Axios, where the lobbying group's CEO Jonathan Spalter tries to argue that consumers shouldn't worry about ISPs, because their data is also being hoovered up further down the supply chain:

"Your browser history is already being aggregated and sold to advertising networks—by virtually every site you visit on the internet. Consumers' browsing history is bought and sold across massive online advertising networks every day. This is the reason so many popular online destinations and services are "free." And, it's why the ads you see on your favorite sites—large and small—always seem so relevant to what you've recently been shopping for online. Of note, internet service providers are relative bit players in the $83 billion digital ad market, which made singling them out for heavier regulations so suspect."

Again, this quite intentionally ignores the fact that whereas you can choose to not use Facebook or Gmail, a lack of competition means you're stuck with your broadband provider. As such, arguing that "everybody else is busy collecting your data" isn't much of an argument, especially when "everybody else" is having their behaviors checked by competitive pressure to offer a better product. As well-respected security expert Bruce Schneier points out in a blog post, these companies desperately want you to ignore this one, central, undeniable truth:

"When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful."

This lack of competition didn't just magically happen. As in other sectors driven by legacy turf protectors, the same ISP lobbyists that just gutted the FCC's privacy rules have a long and proud history of dismantling competitive threats at every conceivable opportunity, then paying legislators to look the other way. That includes pushing for protectionist state laws preventing towns and cities from doing much of anything about it. It's not clear who these ISPs thought they were speaking to in these editorials, but it's certainly not to folks that have actually paid attention to their behavior over the last fifteen years.

The EFF, meanwhile, concisely calls these ISPs' sudden and breathless dedication to privacy nonsense:

"There is a lot to say about the nonsense they've produced here," said Ernesto Falcon, legislative counsel at EFF. "There is little reason to believe they will not start using personal data they've been legally barred from using and selling to bidders without our consent now. The law will soon be tilted in their favor to do it."

Gosh, who to believe? Actual experts on subjects like security or privacy, or one of the more dishonest and anti-competitive business sectors in American industry? All told, you can expect these ISPs to remain on their best behavior for a short while for appearances' sake (and because AT&T wants its Time Warner merger approved) -- but it's not going to be long before they rush to abuse the lack of oversight their campaign contributions just successfully created. Anybody believing otherwise simply hasn't been paying attention to the laundry list of idiotic ISP actions that drove the FCC to try and pass the now-dismantled rules in the first place.



#Privacy #Net Neutrality #Communications #FCC #FTC #ATT #Comcast #Verizon #Lobbying #Corporatism #Politics @LibertyPod+ @Laissez-Faire Capitalism+ @Gadget Gurus+

Seth Martin
  
Techdirt wrote the following post Wed, 05 Apr 2017 08:24:00 -0500

Comcast Paid Civil Rights Groups To Support Killing Broadband Privacy Rules

For years, one of the greasier lobbying and PR tactics by the telecom industry has been the use of minority groups to parrot awful policy positions. Historically, such groups are happy to take financing from a company like Comcast, in exchange for repeating whatever talking point memos are thrust in their general direction, even if the policy being supported may dramatically hurt their constituents. This strategy has played a starring role in supporting anti-consumer mega-mergers, killing attempts to make the cable box market more competitive, and efforts to eliminate net neutrality.

The goal is to provide an artificial wave of "support" for bad policies, used to then justify bad policy votes. And despite this being something the press has highlighted for the better part of several decades, the practice continues to work wonders. Hell, pretending to serve minority communities while effectively undermining them with bad internet policy is part of the reason Comcast now calls top lobbyist David Cohen the company's Chief Diversity Officer (something the folks at Comcast hate when I point it out, by the way).

Last week, we noted how Congress voted to kill relatively modest but necessary FCC privacy protections. You'd be hard pressed to find a single, financially-objective group or person that supports such a move. Even Donald Trump's most obnoxious supporters were relatively disgusted by the vote. Yet The Intercept notes that groups like the League of United Latin American Citizens and the OCA (Asian Pacific American Advocates) breathlessly urged the FCC to kill the rules, arguing that snoopvertising and data collection would be a great boon to low income families:

"The League of United Latin American Citizens and OCA – Asian Pacific American Advocates, two self-described civil rights organizations, told the FCC that “many consumers, especially households with limited incomes, appreciate receiving relevant advertising that is keyed to their interests and provides them with discounts on the products and services they use."

Of course, folks like Senator Ted Cruz then used this entirely-farmed support to insist there were "strenuous objections from throughout the internet community" at the creation of the rules, which simply wasn't true. Most people understood that the rules were a direct response to some reckless and irresponsible privacy practices at major ISPs -- ranging from charging consumers more to keep their data private, or using customer credit data to provide even worse customer support than they usually do. Yes, what consumer (minority or otherwise) doesn't want to pay significantly more money for absolutely no coherent reason?

It took only a little bit of digging for The Intercept to highlight what the real motivation for this support of anti-consumer policies was:

"OCA has long relied on telecom industry cash. Verizon and Comcast are listed as business advisory council members to OCA, and provide funding along with “corporate guidance to the organization.” Last year, both companies sponsored the OCA annual gala.

AT&T, Comcast, Time Warner Cable, Charter Communications and Verizon serve as part of the LULAC “corporate alliance,” providing “advice and assistance” to the group. Comcast gave $240,000 to LULAC between 2004 and 2012."

When a reporter asks these groups why they're supporting internet policies that run in stark contrast to their constituents' interests, you'll usually be met with either breathless indignance at the idea that these groups are being used as marionettes, or no comment whatsoever (which was the case in the Intercept's latest report). This kind of co-opting still somehow doesn't get much attention in the technology press or policy circles, so it continues to work wonders. And it will continue to work wonders as the administration shifts its gaze from gutting privacy protections to killing net neutrality.



#Privacy #Net Neutrality #Communications #Comcast #FCC #Lobbying #LULAC #Politics @LibertyPod+ @Gadget Gurus+ @Laissez-Faire Capitalism+
Seth Martin
  
Yet this happens:
US internet providers pledge to not sell customer data after controversial rule change

The three major US Internet Service Providers (ISPs), Comcast Corp, Verizon Communications Inc, and AT&T Inc, have pledged to protect the private data of US citizens in response to the latest internet bill passed by Congress.

Seth Martin
  
It's probably not a good idea to store anything sensitive, private, or potentially revealing at locations you don't own. Big data companies like this keep your data forever! Choice is only an illusion.

Dropbox: Oops, yeah, we didn't actually delete all your files – this bug kept them in the cloud


Biz apologizes after years-old data mysteriously reappears
Dropbox says it was responsible for an attempted bug fix that instead caused old, deleted data to reappear on the site.…


#Dropbox #Cloud #Storage #Big Data @Gadget Guru+

Seth Martin
  
The Internet Health Report



Welcome to Mozilla’s new open source initiative to document and explain what’s happening to the health of the Internet. Combining research from multiple sources, we collect data on five key topics and offer a brief overview of each.


#Decentralization #Privacy #Internet #Security #Cybersecurity #Mozilla @LibertyPod+ @Gadget Guru+

Seth Martin
  
Deeplinks wrote the following post Thu, 29 Dec 2016 18:10:08 -0600

Secure Messaging Takes Some Steps Forward, Some Steps Back: 2016 In Review

This year has been full of developments in messaging platforms that employ encryption to protect users. 2016 saw an increase in the level of security for some major messaging services, bringing end-to-end encryption to over a billion people. Unfortunately, we’ve also seen major platforms making poor decisions for users and potentially undermining the strong cryptography built into their apps.

WhatsApp makes big improvements, but concerning privacy changes
In late March, the Facebook-owned messaging service WhatsApp introduced end-to-end encryption for its over 1 billion monthly active users. The enormous significance of rolling out strong encryption to such a large user base was compounded by the fact that underlying WhatsApp’s new feature was the Signal Protocol, a well-regarded and independently reviewed encryption protocol. WhatsApp was not only protecting users’ chats, but also doing so with one of the best end-to-end encrypted messaging protocols out there. At the time, we praised WhatsApp and created a guide for both iOS and Android on how you could protect your communications using it.

In August, however, we were alarmed to see WhatsApp establish data-sharing practices that signaled a shift in its attitude toward user privacy. In its first privacy policy change since 2012, WhatsApp laid the groundwork for expanded data-sharing with its parent company, Facebook. This change allows Facebook access to several pieces of users’ WhatsApp information, including WhatsApp phone number, contact list, and usage data (e.g. when a user last used WhatsApp, what device it was used on, and what OS it was run on). This new data-sharing compounded our previous concerns about some of WhatsApp’s non-privacy-friendly default settings.

Signal takes steps forward
Meanwhile, the well-regarded end-to-end encryption app Signal, for which the Signal Protocol was created, has grown its user-base and introduced new features.  Available for iOS and Android (as well as desktop if you have either of the previous two), Signal recently introduced disappearing messages to its platform.  With this, users can be assured that after a chosen amount of time, messages will be deleted from both their own and their contact’s devices.

Signal also recently changed the way users verify their communications, introducing the concept of “safety numbers” to authenticate conversations and verify the long-lived keys of contacts in a more streamlined way.
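The point of a safety number is that it is derived from both parties' identity keys, so both phones display the same digits and either user can detect a swapped key. A simplified sketch of the idea — Signal's actual derivation differs (it iterates SHA-512 over each identity key plus a user identifier), so treat this as an illustration only:

```python
import hashlib

def safety_number(identity_key_a: bytes, identity_key_b: bytes, digits: int = 60) -> str:
    """Toy fingerprint: hash both identity keys in a canonical order so each
    party computes the identical number. Signal's real derivation differs
    (iterated SHA-512 over key material and identifiers), but the idea is
    the same."""
    material = b"".join(sorted([identity_key_a, identity_key_b]))
    digest = hashlib.sha512(material).digest()
    return str(int.from_bytes(digest, "big"))[:digits]

alice_key = b"alice-identity-public-key"   # illustrative stand-ins for real keys
bob_key = b"bob-identity-public-key"

# Both sides compute the same digits regardless of argument order.
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)
```

If a man-in-the-middle substitutes a different identity key, the number changes, and users who compare it out-of-band (in person, over a phone call) will notice.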

Mixed-mode messaging
2016  reminded us that it’s not as black-and-white as secure messaging apps vs. not-secure ones. This year we saw several existing players in the messaging space add end-to-end encrypted options to their platforms. Facebook Messenger added “secret” messaging, and Google released Allo Messenger with “incognito” mode. These end-to-end encrypted options co-exist on the apps with a default option that is only encrypted in transit.

Unfortunately, this “mixed mode” design may do more harm than good by teaching users the wrong lessons about encryption. Branding end-to-end encryption as “secret,” “incognito,” or “private” may encourage users to use end-to-end encryption only when they are doing something shady or embarrassing. And if end-to-end encryption is a feature that you only use when you want to hide or protect something, then the simple act of using it functions as a red flag for valuable, sensitive information. Instead, encryption should be an automatic, straightforward, easy-to-use status quo to protect all communications.

Further, mixing end-to-end encrypted modes with less sensitive defaults has been demonstrated to result in users making mistakes and inadvertently sending sensitive messages without end-to-end encryption.

In contrast, the end-to-end encrypted “letter sealing” that LINE expanded this year is enabled by default. Since first introducing it for 1-on-1 chats in 2015, LINE has made end-to-end encryption the default and progressively expanded the feature to group chats and 1-on-1 calls. Users can still send messages on LINE without end-to-end encryption by changing security settings, but the company recommends leaving the default “letter sealing” enabled at all times. This kind of default design makes it easier for users to communicate with encryption from the get-go, and much more difficult for them to make dangerous mistakes.

The dangers of unsecure messaging
In stark contrast to the above-mentioned secure messaging apps, a November report from Citizen Lab exposes China’s WeChat messenger’s practice of performing selective censorship on its over 806 million monthly active users.  When a user registers with a Chinese phone number, WeChat will censor content critical of the regime no matter where that user is. The censorship effectively “follows them around,” even if the user switches to an international phone number or leaves China to travel abroad. Effectively, WeChat users may be under the control of China’s censorship regime no matter where they go.

Compared to the secure messaging practices EFF advocates for, WeChat represents the other end of the messaging spectrum, employing algorithms to control and limit access rather than using privacy-enhancing technologies to allow communication. This is an urgent reminder of how users can be put in danger when their communications are available to platform providers and governments, and why it is so important to continue promoting privacy-enhancing technologies and secure messaging.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2016.



#Encryption #Privacy #Communications #Messaging #Security #WhatsApp #Signal #LINE #Allo #incognito  
@Gadget Guru+ @LibertyPod+
Mike Macgirvin
  
I tend to disagree about mixed mode messaging. We need a range of communication tools, from hush-hush ultra top secret to public and open. Both ends of the spectrum have problems. That's why you need privacy.
Seth Martin
  last edited: Mon, 02 Jan 2017 10:46:52 -0600  
I agree with you, Mike. I just think it's important for these messaging apps to have encryption on by default to curb authorities targeting those that use the feature selectively.
Fabián Bonetti
 
Mike, why do I have to leave my server to reply to you?

Seth Martin
  last edited: Tue, 11 Oct 2016 12:58:51 -0500  
We use #Hubzilla at my workplace so our data remains our data!
I'm also considering introducing the team to Riot/matrix for a Slack/IRC like experience.

Motherboard wrote the following post Tue, 11 Oct 2016 11:45:00 -0500

Facebook's Version of Slack Is Coming for Your Workplace. What Now?


Sitting at work all day scrolling through Facebook is almost definitely frowned upon by your bosses, but Facebook wants to change that with the launch of a new version of Facebook—specifically designed for work—called Workplace.

Facebook is ubiquitous. If it’s not Mark Zuckerberg handing out “Free Basics” to developing countries, it’s internet connectivity beamed down from giant, solar-powered drones. As of July 2016, the social network had 1.71 billion monthly users. Facebook is without doubt one of the most pervasive technological phenomena of the 21st Century. Thing is, Facebook’s hit a brick wall when it comes to growth. Everybody who would want to use Facebook, generally speaking, is already, or at least will be using Facebook very soon. So, to eke out the last embers of growth in a saturated market, Facebook has now, officially, entered your workplace.

Workplace by Facebook launched on Monday October 10 after almost two years of development and months of beta tests on early customers. The service is the social giant’s new effort to infiltrate businesses around the world, and to rival office apps like Slack and Microsoft’s Yammer. Essentially, it’s a modified version of the Facebook we all know and love/hate. It’s the same algorithms, the same news feeds, the same ability to share photos and documents and chat in groups or in private—only your bosses can see everything that happens and it’s all controlled by your company’s IT team. Workplace is on mobile, too, with standalone apps for Android and iOS meaning employees can access everything remotely, just like users would with the regular Facebook app.

Facebook, with Workplace, is hoping to revolutionise how companies want to work with employees by shedding the old ideals of emails and intranet. “It's for everyone, not just for one team, not just for five percent of the company, it's for everyone from the CEO to the factory workers to the baristas in the coffee shop,” a Facebook spokesperson said at the London launch event this week, which Motherboard attended. “Even people who don't have a desk, even people who have never had a PC, even people who have never had an email.”

Image: Workplace by Facebook

The question is, to what extent will this horizontal workflow management clash with privacy concerns? If your team or company decides to implement Workplace, will signing up be compulsory? It would seem so, if Facebook has its way and truly lets your bosses ditch emails and intranet and all of the inner workings of PC-based bureaucracy. But then what?

The Facebook spokesperson at the launch event said it best when he was explaining how the chief information officer of an airline wanted to be able to see what his staff were doing in their personal, consumer versions of Facebook groups. “Every crew of every flight were using Facebook groups,” the spokesperson said. “It's not necessarily what the CIO of the company wanted, because he wants to control who sees the information.”

But the reason why many organisations will be attracted to Workplace, such as the familiarity employees will have with regular old Facebook, could also be its downfall. Employees will be accustomed to Facebook being a place for gossip, cat videos, and friends. So what’s the decorum for Workplace by Facebook? While the two are completely different applications, old habits die hard. Who can you trust to speak to in private? Is my group being monitored for productivity? Do I have to befriend everyone in the company, and if I block someone’s news feed, will my boss know I hate them?

It’s also worth noting, as highlighted in the Gawker vs Hulk Hogan case, in which Gawker Media’s Slack conversations were subpoenaed for court, that your workplace chats may well one day be used as evidence against you. While data on Workplace belongs to the company using it, rather than Facebook, it’s still wise to watch what you say with any office productivity app. Facebook did not immediately respond to Motherboard’s request for comment on whether workplace chats would be susceptible to subpoenas.

Ultimately, Facebook is banking on the familiarity of the platform winning over customers. It appears easy to use and offers all of the same features as regular Facebook. But in the end, only time will tell whether employees will ever be, or ever want to be, comfortable using Facebook as a work tool.


#CCF #Facebook #Social Networking #Communications #Privacy @Gadget Guru+
Fabio
  
The problem with SpiderOak products is that while they are nice in theory, no source is available... so you must trust their word...
Manuel
  
We use #Hubzilla at my workplace so our data remains our data!

:like
Manuel
  
Image/photo
Seth Martin
  last edited: Tue, 11 Oct 2016 13:07:51 -0500  
Mike Macgirvin wrote the following post Thu, 08 Sep 2016 04:16:36 -0500
If you know folks who use Facebook and 'logout' regularly to prevent tracking or prevent 'haha I hacked your FB account', I have it on good authority that in the last few days this (logout) has been rendered useless. You are now always logged in. Logging out and visiting a facebook page presents a dialogue to re-connect the last session. Dismissing the dialogue without acting on it actually re-connects the previous session with full access to the "logged out" account. My own investigation suggests that removing all facebook.com cookies might actually log you out, but if this information becomes widely known, they'll just attach a cookie from some obscure domain that you won't be looking for. It's not like they can't afford to buy a domain name.  

Granted there are probably less than a dozen people in the world who logout of Facebook, but if you know any of these people please pass the word along.
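The "remove all facebook.com cookies" workaround Mike mentions can be expressed with Python's standard-library cookie jar. This is a sketch of the domain-filtering idea only — real browsers keep cookies in their own stores, and the cookie names here are made up:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name: str, value: str, domain: str) -> Cookie:
    """Build a minimal cookie for the demo jar (names/values are made up)."""
    return Cookie(
        version=0, name=name, value=value, port=None, port_specified=False,
        domain=domain, domain_specified=True,
        domain_initial_dot=domain.startswith("."),
        path="/", path_specified=True, secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={}, rfc2109=False,
    )

jar = CookieJar()
jar.set_cookie(make_cookie("xs", "session-token", ".facebook.com"))
jar.set_cookie(make_cookie("theme", "dark", ".example.org"))

# Drop every cookie whose domain ends in facebook.com, keep the rest.
for cookie in list(jar):
    if cookie.domain.endswith("facebook.com"):
        jar.clear(domain=cookie.domain)

assert [c.domain for c in jar] == [".example.org"]
```

As Mike notes, though, this only helps until the tracking moves to a cookie set from some unrelated-looking domain, which a domain filter like this would never catch.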


#Facebook #Privacy
Mike Macgirvin
  
Not at all. That's a pretty sensible approach. I wasn't aware that FB had an onion friendly service, so thanks for that bit of knowledge.
Michael Meer
  last edited: Sat, 10 Sep 2016 05:49:24 -0500  
You may call it onion-friendly; I call it a trap, because you have to authenticate. And FB offers SAML-based single sign-on.
Your activities in Tor may still be tracked. Please don't forget they work together with lots of US agencies, and you are the product, not the customer. It may even be that your data is worth more when you use Tor.

I can only guess; I don't know what really happens in the background. But using Facebook through Tor might harm your privacy even when you use the Tor Browser. So be careful, think about it, and make your own decision.
Seth Martin
  
Deeplinks wrote the following post Wed, 17 Aug 2016 09:12:52 -0500

With Windows 10, Microsoft Blatantly Disregards User Choice and Privacy: A Deep Dive



Microsoft had an ambitious goal with the launch of Windows 10: a billion devices running the software by the end of 2018. In its quest to reach that goal, the company aggressively pushed Windows 10 on its users and went so far as to offer free upgrades for a whole year. However, the company’s strategy for user adoption has trampled on essential aspects of modern computing: user choice and privacy. We think that’s wrong.

You don’t need to search long to come across stories of people who are horrified and amazed at just how far Microsoft has gone in order to increase Windows 10’s install base. Sure, there is some misinformation and hyperbole, but there are also some real concerns that current and future users of Windows 10 should be aware of. As the company is currently rolling out its “Anniversary Update” to Windows 10, we think it’s an appropriate time to focus on and examine the company’s strategy behind deploying Windows 10.

Disregarding User Choice

The tactics Microsoft employed to get users of earlier versions of Windows to upgrade to Windows 10 went from annoying to downright malicious. Some highlights: Microsoft installed an app in users’ system trays advertising the free upgrade to Windows 10. The app couldn’t be easily hidden or removed, but some enterprising users figured out a way. Then, the company kept changing the app and bundling it into various security patches, creating a cat-and-mouse game to uninstall it.

Eventually, Microsoft started pushing Windows 10 via its Windows Update system. It started off by pre-selecting the download for users and downloading it on their machines. Not satisfied, the company eventually made Windows 10 a recommended update so users receiving critical security updates were now also downloading an entirely new operating system onto their machines without their knowledge. Microsoft even rolled in the Windows 10 ad as part of an Internet Explorer security patch. Suffice to say, this is not the standard when it comes to security updates, and isn’t how most users expect them to work. When installing security updates, users expect to patch their existing operating system, and not see an advertisement or find out that they have downloaded an entirely new operating system in the process.

In May 2016, in an action designed in a way we think was highly deceptive, Microsoft actually changed the expected behavior of a dialog window, a user interface element that’s been around and acted the same way since the birth of the modern desktop. Specifically, when prompted with a Windows 10 update, if the user chose to decline it by hitting the ‘X’ in the upper right hand corner, Microsoft interpreted that as consent to download Windows 10.

Time after time, with each update, Microsoft chose to employ questionable tactics to cause users to download a piece of software that many didn’t want. What users actually wanted didn’t seem to matter. In an extreme case, members of a wildlife conservation group in the African jungle felt that the automatic download of Windows 10 on a limited bandwidth connection could have endangered their lives if a forced upgrade had begun during a mission.

Disregarding User Privacy

The trouble with Windows 10 doesn’t end with forcing users to download the operating system. By default, Windows 10 sends an unprecedented amount of usage data back to Microsoft, and the company claims most of it is to “personalize” the software by feeding it to the OS assistant called Cortana. Here’s a non-exhaustive list of data sent back: location data, text input, voice input, touch input, webpages you visit, and telemetry data regarding your general usage of your computer, including which programs you run and for how long.

While we understand that many users find features like Cortana useful, and that such features would be difficult (though not necessarily impossible) to implement in a way that doesn’t send data back to the cloud, the fact remains that many users would much prefer to opt out of these features in exchange for maintaining their privacy.

And while users can opt out of some of these settings, doing so is no guarantee that your computer will stop talking to Microsoft’s servers. A significant issue is the telemetry data the company receives. While Microsoft insists that it aggregates and anonymizes this data, it hasn’t explained just how it does so. Microsoft also won’t say how long this data is retained, instead providing only general timeframes. Worse yet, unless you’re an enterprise user, you have to share at least some of this telemetry data with Microsoft, with no way to opt out of it.

Microsoft has tried to explain this lack of choice by saying that Windows Update won’t function properly on copies of the operating system with telemetry reporting turned to its lowest level. In other words, Microsoft is claiming that giving ordinary users more privacy by letting them turn telemetry reporting down to its lowest level would risk their security since they would no longer get security updates.1 (Notably, this is not something many articles about Windows 10 have touched on.)

But this is a false choice that is entirely of Microsoft’s own creation. There’s no good reason why the types of data Microsoft collects at each telemetry level couldn’t be adjusted so that even at the lowest level of telemetry collection, users could still benefit from Windows Update and secure their machines from vulnerabilities, without having to send back things like app usage data or unique IDs like an IMEI number.

And if this wasn’t bad enough, Microsoft’s questionable upgrade tactics of bundling Windows 10 into various levels of security updates have also managed to lower users’ trust in the necessity of security updates. Sadly, this has led some people to forego security updates entirely, meaning that there are users whose machines are at risk of being attacked.

There’s no doubt that Windows 10 has some great security improvements over previous versions of the operating system. But it’s a shame that Microsoft made users choose between having privacy and security.

The Way Forward

Microsoft should come clean with its user community. The company needs to acknowledge its missteps and offer real, meaningful opt-outs to the users who want them, preferably in a single unified screen. It also needs to be straightforward in separating security updates from operating system upgrades going forward, and not try to bypass user choice and privacy expectations.

Otherwise it will face backlash in the form of individual lawsuits, state attorney general investigations, and government investigations.

We at EFF have heard from many users who have asked us to take action, and we urge Microsoft to listen to these concerns and incorporate this feedback into the next release of its operating system. Otherwise, Microsoft may find that it has inadvertently discovered just how far it can push its users before they abandon a once-trusted company for a better, more privacy-protective solution.
  • 1. Confusingly, Microsoft calls the lowest level of telemetry reporting (which is not available on Home or Professional editions of Windows 10) the “security” level—even though it prevents security patches from being delivered via Windows Update.

#Privacy #Security #Microsoft #Windows #Cybersecurity @Gadget Guru+ @LibertyPod+
kris
  
My main OS at home is kubuntu.

Seth Martin
  last edited: Sun, 07 Aug 2016 10:14:09 -0500  
Edward Snowden Not Dead: ‘He’s Fine’ Says Glenn Greenwald After Mysterious Tweet

Snowden issued a cryptic 64-character code via Twitter, leading to concern that the whistleblower had been captured or killed, triggering a “dead man’s switch” message designed to be released if he didn’t check in from his computer at a certain time.


#Snowden #Whistleblowing #Privacy @Gadget Guru+  @LibertyPod+
David
 from Diaspora
Output from sha256sum, almost certainly. He's verifying receipt of a file.
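For anyone checking that guess: a SHA-256 digest, as printed by sha256sum, is always 64 hexadecimal characters, the same length as the tweeted code. A quick illustration (the input here is just a made-up placeholder, not the file Snowden hashed):

```python
import hashlib

# sha256sum prints the SHA-256 digest of a file as lowercase hex.
# SHA-256 output is 32 bytes, so the hex form is always 64 characters.
digest = hashlib.sha256(b"example file contents").hexdigest()

print(digest)
print(len(digest))  # 64
```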

Seth Martin
  last edited: Fri, 29 Jul 2016 18:39:58 -0500  
A little late, but don’t fall for the “free” “upgrade” to Windows 10, folks: it’s actually a downgrade. More bait and switch from Microsoft. It isn’t free, because you pay with your formerly private data and are forced to see ads. Microsoft made it sound like an upgrade, but it turns out that in Windows 10, the Enterprise edition is what you’ll have to buy to get back the control that you once had.

Windows 10 Pro Anniversary Update tweaked to stop you disabling app promos

Group Policy changes require Enterprise or Education edition.
Group Policy changes in Windows 10 Anniversary Update, set for release shortly, mean that users of the Pro edition can no longer disable some of the more intrusive aspects of the operating system.…


#Microsoft #Windows #Bait #Privacy @Gadget Guru+
Mike Macgirvin
  
*If* Erik doesn't have Diaspora enabled, his comment will come across as a reshare (by the conversation owner) on the Diaspora side, since without Diaspora protocol support, he can't sign Diaspora XML fields before sending the comment.

Liking a comment isn't supported on the Diaspora side and this should come across on that network as an activity by you. "Seth liked Erik's comment".
Erik Lundin
  
In my settings the option "Enable the (experimental) Diaspora protocol for this channel" is disabled.
Seth Martin
  
I was finally able to test with diaspora and it works as @Mike Macgirvin describes. It's only in Friendica that the Like appears as a reshare in a top-level post.

Seth Martin
  last edited: Thu, 09 Jun 2016 18:29:25 -0500  
“Sure, it isn’t malware that’s designed with a malicious purpose. It’s not being installed on your computer with the aim of stealing your data.”

According to the default privacy settings, that statement is false.

How Windows 10 became malware

Any software — even a premier operating system — that gets onto computers through stealth means has crossed over to the dark side.


#Microsoft #Windows #Malware #Spyware #Privacy @Gadget Guru+
Seth Martin
  
I need to learn more about Windows 10 for reasons similar to Mike's. My plan is to use VirtualBox, but I'm not yet sure Microsoft's activation will allow that.

My Laptop's Windows 7 partition has the Enterprise edition installed and it's not affected by this particular Microsoft Malware.
Seth Martin
  
Windows 10 upgrade will soon be easier to reject

Pressing the X will close the window, as it should.
Marshall Sutherland
  
If nothing else, it should stop after July 29th (unless they extend the free upgrade period).

Or, maybe they will dig around your computer for credit card info, charge you, then give you the upgrade.
Seth Martin
  
This bill would make several pieces of software that I use illegal. Even the software running the website you're viewing right now would be illegal.

'Leaked' Burr-Feinstein Encryption Bill Is a Threat to American Privacy

Every service, person, human rights worker, protester, reporter, company—the list goes on—will be easier to spy on.


#Privacy #Surveillance #Encryption #Freedom #Liberty @LibertyPod+ @Gadget Guru+
Mike Macgirvin
  
I don't know that it has been formally introduced, but Feinstein is pretty tenacious. She'll keep refining it and wait for some high-profile event to act as a trigger/catalyst, then ram it through in the aftermath when the opposition is on the defensive. I suspect she also leaks her own bills. That way all the furor erupts way ahead of schedule, and by the time the bill is actually introduced, nobody can mobilise opposition because by then it's old news.
Marshall Sutherland
  
Here is my answer, I guess... They are still beating the drum for this abomination.

The Lawmakers Who Control Your Digital Future Are Clueless About Technology

It is becoming increasingly clear that Senators Dianne Feinstein and Richard Burr, co-chairs of the Senate Intelligence Committee, don’t have the slightest clue about how encryption works. Good thing they’re currently pushing disastrous legislation that would force tech companies to decrypt things for law enforcement!

Today Feinstein and Burr co-authored an op-ed in the Wall Street Journal entitled “Encryption Without Tears,” and wow, it is bad. They have yet again demonstrated a failure to grasp even the most basic principles of technology.

Seth Martin
  
The Intercept wrote the following post Thu, 19 Feb 2015 13:25:38 -0600
How Spies Stole the Keys to the Encryption Castle

AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden.

The hack was perpetrated by a joint unit consisting of operatives from the NSA and its British counterpart Government Communications Headquarters, or GCHQ. The breach, detailed in a secret 2010 GCHQ document, gave the surveillance agencies the potential to secretly monitor a large portion of the world’s cellular communications, including both voice and data.

The company targeted by the intelligence agencies, Gemalto, is a multinational firm incorporated in the Netherlands that makes the chips used in mobile phones and next-generation credit cards. Among its clients are AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. The company operates in 85 countries and has more than 40 manufacturing facilities. One of its three global headquarters is in Austin, Texas and it has a large factory in Pennsylvania.

In all, Gemalto produces some 2 billion SIM cards a year. Its motto is “Security to be Free.”

With these stolen encryption keys, intelligence agencies can monitor mobile communications without seeking or receiving approval from telecom companies and foreign governments. Possessing the keys also sidesteps the need to get a warrant or a wiretap, while leaving no trace on the wireless provider’s network that the communications were intercepted. Bulk key theft additionally enables the intelligence agencies to unlock any previously encrypted communications they had already intercepted, but did not yet have the ability to decrypt.

As part of the covert operations against Gemalto, spies from GCHQ — with support from the NSA — mined the private communications of unwitting engineers and other company employees in multiple countries.

Gemalto was totally oblivious to the penetration of its systems — and the spying on its employees. “I’m disturbed, quite concerned that this has happened,” Paul Beverly, a Gemalto executive vice president, told The Intercept. “The most important thing for me is to understand exactly how this was done, so we can take every measure to ensure that it doesn’t happen again, and also to make sure that there’s no impact on the telecom operators that we have served in a very trusted manner for many years. What I want to understand is what sort of ramifications it has, or could have, on any of our customers.” He added that “the most important thing for us now is to understand the degree” of the breach.

Leading privacy advocates and security experts say that the theft of encryption keys from major wireless network providers is tantamount to a thief obtaining the master ring of a building superintendent who holds the keys to every apartment. “Once you have the keys, decrypting traffic is trivial,” says Christopher Soghoian, the principal technologist for the American Civil Liberties Union. “The news of this key theft will send a shock wave through the security community.”

Beverly said that after being contacted by The Intercept, Gemalto’s internal security team began on Wednesday to investigate how their system was penetrated and could find no trace of the hacks. When asked if the NSA or GCHQ had ever requested access to Gemalto-manufactured encryption keys, Beverly said, “I am totally unaware. To the best of my knowledge, no.”

According to one secret GCHQ slide, the British intelligence agency penetrated Gemalto’s internal networks, planting malware on several computers, giving GCHQ secret access. We “believe we have their entire network,” the slide’s author boasted about the operation against Gemalto.

Additionally, the spy agency targeted unnamed cellular companies’ core networks, giving it access to “sales staff machines for customer information and network engineers machines for network maps.” GCHQ also claimed the ability to manipulate the billing servers of cell companies to “suppress” charges in an effort to conceal the spy agency’s secret actions against an individual’s phone. Most significantly, GCHQ also penetrated “authentication servers,” allowing it to decrypt data and voice communications between a targeted individual’s phone and their telecom provider’s network. A note accompanying the slide asserted that the spy agency was “very happy with the data so far and [was] working through the vast quantity of product.”

The Mobile Handset Exploitation Team (MHET), whose existence has never before been disclosed, was formed in April 2010 to target vulnerabilities in cell phones. One of its main missions was to covertly penetrate computer networks of corporations that manufacture SIM cards, as well as those of wireless network providers. The team included operatives from both GCHQ and the NSA.

While the FBI and other U.S. agencies can obtain court orders compelling U.S.-based telecom companies to allow them to wiretap or intercept the communications of their customers, on the international front this type of data collection is much more challenging. Unless a foreign telecom or foreign government grants access to their citizens’ data to a U.S. intelligence agency, the NSA or CIA would have to hack into the network or specifically target the user’s device for a more risky “active” form of surveillance that could be detected by sophisticated targets. Moreover, foreign intelligence agencies would not allow U.S. or U.K. spy agencies access to the mobile communications of their heads of state or other government officials.

“It’s unbelievable. Unbelievable,” said Gerard Schouw, a member of the Dutch Parliament when told of the spy agencies’ actions. Schouw, the intelligence spokesperson for D66, the largest opposition party in the Netherlands, told The Intercept, “We don’t want to have the secret services from other countries doing things like this.” Schouw added that he and other lawmakers will ask the Dutch government to provide an official explanation and to clarify whether the country’s intelligence services were aware of the targeting of Gemalto, whose official headquarters is in Amsterdam.

Last November, the Dutch government amended its constitution to include explicit protection for the privacy of digital communications, including those made on mobile devices. “We have, in the Netherlands, a law on the [activities] of secret services. And hacking is not allowed,” he said. Under Dutch law, the interior minister would have to sign off on such operations by foreign governments’ intelligence agencies. “I don’t believe that he has given his permission for these kind of actions.”

The U.S. and British intelligence agencies pulled off the encryption key heist in great stealth, giving them the ability to intercept and decrypt communications without alerting the wireless network provider, the foreign government or the individual user that they have been targeted. “Gaining access to a database of keys is pretty much game over for cellular encryption,” says Matthew Green, a cryptography specialist at the Johns Hopkins Information Security Institute. The massive key theft is “bad news for phone security. Really bad news.”


AS CONSUMERS BEGAN to adopt cellular phones en masse in the mid-1990s, there were no effective privacy protections in place. Anyone could buy a cheap device from RadioShack capable of intercepting calls placed on mobile phones. The shift from analog to digital networks introduced basic encryption technology, though it was still crackable by tech savvy computer science graduate students, as well as the FBI and other law enforcement agencies, using readily available equipment.

Today, second-generation (2G) phone technology, which relies on a deeply flawed encryption system, remains the dominant platform globally, though U.S. and European cell phone companies now use 3G, 4G and LTE technology in urban areas. These include more secure, though not invincible, methods of encryption, and wireless carriers throughout the world are upgrading their networks to use these newer technologies.

It is in the context of such growing technical challenges to data collection that intelligence agencies, such as the NSA, have become interested in acquiring cellular encryption keys. “With old-fashioned [2G], there are other ways to work around cellphone security without those keys,” says Green, the Johns Hopkins cryptographer. “With newer 3G, 4G and LTE protocols, however, the algorithms aren’t as vulnerable, so getting those keys would be essential.”

The privacy of all mobile communications — voice calls, text messages and internet access — depends on an encrypted connection between the cell phone and the wireless carrier’s network, using keys stored on the SIM, a tiny chip smaller than a postage stamp which is inserted into the phone. All mobile communications on the phone depend on the SIM, which stores and guards the encryption keys created by companies like Gemalto. SIM cards can be used to store contacts, text messages, and other important data, like one’s phone number. In some countries, SIM cards are used to transfer money. As The Intercept reported last year, having the wrong SIM card can make you the target of a drone strike.

SIM cards were not invented to protect individual communications — they were designed to do something much simpler: ensure proper billing and prevent fraud, which was pervasive in the early days of cell phones. Soghoian compares the use of encryption keys on SIM cards to the way Social Security numbers are used today. “Social security numbers were designed in the 1930s to track your contributions to your government pension,” he says. “Today they are used as a quasi national identity number, which was never their intended purpose.”

Because the SIM card wasn’t created with call confidentiality in mind, the manufacturers and wireless carriers don’t make a great effort to secure their supply chain. As a result, the SIM card is an extremely vulnerable component of a mobile phone. “I doubt anyone is treating those things very carefully,” says Green. “Cell companies probably don’t treat them as essential security tokens. They probably just care that nobody is defrauding their networks.” The ACLU’s Soghoian adds, “These keys are so valuable that it makes sense for intel agencies to go after them.”

As a general rule, phone companies do not manufacture SIM cards, nor program them with secret encryption keys. It is cheaper and more efficient for them to outsource this sensitive step in the SIM card production process. They purchase them in bulk with the keys pre-loaded by other corporations. Gemalto is the largest of these SIM “personalization” companies.

After a SIM card is manufactured, the encryption key, known as a “Ki,” is burned directly onto the chip. A copy of the key is also given to the cellular provider, allowing its network to recognize an individual’s phone. In order for the phone to be able to connect to the wireless carriers’ network, the phone — with the help of the SIM — authenticates itself using the Ki that has been programmed onto the SIM. The phone conducts a secret “handshake” that validates that the Ki on the SIM matches the Ki held by the mobile company. Once that happens, the communications between the phone and the network are encrypted. Even if GCHQ or the NSA were to intercept the phone signals as they are transmitted through the air, the intercepted data would be a garbled mess. Decrypting it can be challenging and time-consuming. Stealing the keys, on the other hand, is beautifully simple, from the intelligence agencies’ point of view, as the pipeline for producing and distributing SIM cards was never designed to thwart mass surveillance efforts.
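The handshake described above can be sketched as a simple challenge-response exchange. This is only an illustration of the idea, not any carrier's actual algorithm: real GSM/UMTS networks use operator-specific A3/A8 ciphers (such as COMP128 or MILENAGE) rather than the HMAC used here, and the names `sres`, `ki`, and `rand` are simplified stand-ins.

```python
import hashlib
import hmac
import os

# Illustrative sketch only: real networks use operator-specific
# A3/A8 algorithms, not HMAC-SHA256.
def sres(ki: bytes, rand: bytes) -> bytes:
    """Signed response derived from the secret Ki and the network's challenge."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

ki = os.urandom(16)    # secret key burned onto the SIM; the carrier holds a copy
rand = os.urandom(16)  # random challenge sent by the network to the phone

# SIM and network each compute the response independently; a match proves
# the SIM holds the right Ki without Ki ever crossing the air interface.
assert sres(ki, rand) == sres(ki, rand)
```

The sketch also makes the consequence of the theft concrete: anyone holding a copy of Ki can run the same computation, derive the same session keys as the network, and decrypt intercepted traffic passively, leaving no trace on the carrier's systems.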

One of the creators of the encryption protocol that is widely used today for securing emails, Adi Shamir, famously asserted: “Cryptography is typically bypassed, not penetrated.” In other words, it is much easier (and sneakier) to open a locked door when you have the key than it is to break down the door using brute force. While the NSA and GCHQ have substantial resources dedicated to breaking encryption, it is not the only way — and certainly not always the most efficient — to get at the data they want. “NSA has more mathematicians on its payroll than any other entity in the U.S.,” says the ACLU’s Soghoian. “But the NSA’s hackers are way busier than its mathematicians.”

GCHQ and the NSA could have taken any number of routes to steal SIM encryption keys and other data. They could have physically broken into a manufacturing plant. They could have broken into a wireless carrier’s office. They could have bribed, blackmailed or coerced an employee of the manufacturer or cell phone provider. But all of that comes with substantial risk of exposure. In the case of Gemalto, hackers working for GCHQ remotely penetrated the company’s computer network in order to steal the keys in bulk as they were en route to the wireless network providers.

SIM card “personalization” companies like Gemalto ship hundreds of thousands of SIM cards at a time to mobile phone operators across the world. International shipping records obtained by The Intercept show that in 2011, Gemalto shipped 450,000 smart cards from its plant in Mexico to Germany’s Deutsche Telekom in just one shipment.

In order for the cards to work and for the phones’ communications to be secure, Gemalto also needs to provide the mobile company with a file containing the encryption keys for each of the new SIM cards. These master key files could be shipped via FedEx, DHL, UPS or another snail mail provider. More commonly, they could be sent via email or through File Transfer Protocol, FTP, a method of sending files over the internet.

The moment the master key set is generated by Gemalto or another personalization company, but before it is sent to the wireless carrier, is the most vulnerable moment for interception. “The value of getting them at the point of manufacture is you can presumably get a lot of keys in one go, since SIM chips get made in big batches,” says Green, the cryptographer. “SIM cards get made for lots of different carriers in one facility.” In Gemalto’s case, GCHQ hit the jackpot, as the company manufactures SIMs for hundreds of wireless network providers, including all of the leading U.S. — and many of the largest European — companies.

But obtaining the encryption keys while Gemalto still held them required finding a way into the company’s internal systems.

[Image: Diagram from a top-secret GCHQ slide.]

TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys. They did this by utilizing the NSA’s X-KEYSCORE program, which allowed them access to private emails hosted by the SIM card and mobile companies’ servers, as well as those of major tech corporations, including Yahoo! and Google.

In effect, GCHQ clandestinely cyberstalked Gemalto employees, scouring their emails in an effort to find people who may have had access to the company’s core networks and Ki-generating systems. The intelligence agency’s goal was to find information that would aid in breaching Gemalto’s systems, making it possible to steal large quantities of encryption keys. The agency hoped to intercept the files containing the keys as they were transmitted between Gemalto and its wireless network provider customers.

GCHQ operatives identified key individuals and their positions within Gemalto and then dug into their emails. In one instance, GCHQ zeroed in on a Gemalto employee in Thailand who they observed sending PGP-encrypted files, noting that if GCHQ wanted to expand its Gemalto operations, “he would certainly be a good place to start.” They did not claim to have decrypted the employee’s communications, but noted that the use of PGP could mean the contents were potentially valuable.

The cyberstalking was not limited to Gemalto. GCHQ operatives wrote a script that allowed the agency to mine the private communications of employees of major telecommunications and SIM “personalization” companies for technical terms used in the assigning of secret keys to mobile phone customers. Employees for the SIM card manufacturers and wireless network providers were labeled as “known individuals and operators targeted” in a top-secret GCHQ document.

According to that April 2010 document, “PCS Harvesting at Scale,” hackers working for GCHQ focused on “harvesting” massive amounts of individual encryption keys “in transit between mobile network operators and SIM card personalisation centres” like Gemalto. The spies “developed a methodology for intercepting these keys as they are transferred between various network operators and SIM card providers.” By that time, GCHQ had developed “an automated technique with the aim of increasing the volume of keys that can be harvested.”

The PCS Harvesting document acknowledged that, in searching for information on encryption keys, GCHQ operatives would undoubtedly vacuum up “a large number of unrelated items” from the private communications of targeted employees. “[H]owever an analyst with good knowledge of the operators involved can perform this trawl regularly and spot the transfer of large batches of [keys].”

The document noted that many SIM card manufacturers transferred the encryption keys to wireless network providers “by email or FTP with simple encryption methods that can be broken…or occasionally with no encryption at all.” To get bulk access to encryption keys, all the NSA or GCHQ needed to do was intercept emails or file transfers as they were sent over the internet — something both agencies already do millions of times per day. A footnote in the 2010 document observed that the use of “strong encryption products…is becoming increasingly common” in transferring the keys.

In its key harvesting “trial” operations in the first quarter of 2010, GCHQ successfully intercepted keys used by wireless network providers in Iran, Afghanistan, Yemen, India, Serbia, Iceland and Tajikistan. But, the agency noted, its automated key harvesting system failed to produce results against Pakistani networks, denoted as “priority targets” in the document, despite the fact that GCHQ had a store of Kis from two providers in the country, Mobilink and Telenor. “[I]t is possible that these networks now use more secure methods to transfer Kis,” the document concluded.

From December 2009 through March 2010, a month before the Mobile Handset Exploitation Team was formed, GCHQ conducted a number of trials aimed at extracting encryption keys and other personalized data for individual phones. In one two-week period, they accessed the emails of 130 people associated with wireless network providers or SIM card manufacturing and personalization. This operation produced nearly 8,000 keys matched to specific phones in 10 countries. In another two-week period, by mining just 6 email addresses, they produced 85,000 keys. At one point in March 2010, GCHQ intercepted nearly 100,000 keys for mobile phone users in Somalia. By June, they’d compiled 300,000. “Somali providers are not on GCHQ’s list of interest,” the document noted. “[H]owever, this was usefully shared with NSA.”

The GCHQ documents only contain statistics for three months of encryption key theft in 2010. During this period, millions of keys were harvested. The documents stated explicitly that GCHQ had already created a constantly evolving automated process for bulk harvesting of keys. They describe active operations targeting Gemalto’s personalization centers across the globe, as well as other major SIM card manufacturers and the private communications of their employees.

A top-secret NSA document asserted that, as of 2009, the U.S. spy agency already had the capacity to process between 12 and 22 million keys per second for later use against surveillance targets. In the future, the agency predicted, it would be capable of processing more than 50 million per second. The document did not state how many keys were actually processed, just that the NSA had the technology to perform such swift, bulk operations. It is impossible to know how many keys have been stolen by the NSA and GCHQ to date, but, even using conservative math, the numbers are likely staggering.
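To put those stated rates in perspective, here is a back-of-the-envelope calculation (simple arithmetic on the figures above, not a number from the documents themselves):

```python
# Sustained throughput implied by the 2009 figures of 12-22 million
# keys per second, extrapolated over a full day (86,400 seconds).
low, high = 12_000_000, 22_000_000  # keys per second
per_day_low = low * 86_400
per_day_high = high * 86_400

print(f"{per_day_low:.2e} to {per_day_high:.2e} keys per day")
# i.e. on the order of one to two trillion keys per day of capacity
```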

GCHQ assigned “scores” to more than 150 individual email addresses based on how often the users mentioned certain technical terms, and then intensified the mining of those individuals’ accounts based on priority. The highest scoring email address was that of an employee of Chinese tech giant Huawei, which the U.S. has repeatedly accused of collaborating with Chinese intelligence. In all, GCHQ harvested the emails of employees of hardware companies that manufacture phones, such as Ericsson and Nokia; operators of mobile networks, such as MTN Irancell and Belgacom; SIM card providers, such as Bluefish and Gemalto; and employees of targeted companies who used email providers such as Yahoo! and Google. During the three-month trial, the largest number of email addresses harvested were those belonging to Huawei employees, followed by MTN Irancell. The third largest class of emails harvested in the trial were private Gmail accounts, presumably belonging to employees at targeted companies.

The GCHQ program targeting Gemalto was called DAPINO GAMMA. In 2011, GCHQ launched operation HIGHLAND FLING to mine the email accounts of Gemalto employees in France and Poland. A top-secret document on the operation stated that one of the aims was “getting into French HQ” of Gemalto “to get in to core data repositories.” France, home to one of Gemalto’s global headquarters, is the nerve center of the company’s worldwide operations. Another goal was to intercept private communications of employees in Poland that “could lead to penetration into one or more personalisation centers” — the factories where the encryption keys are burned onto SIM cards.

As part of these operations, GCHQ operatives acquired the usernames and passwords for Facebook accounts of Gemalto targets. An internal top-secret GCHQ wiki on the program from May 2011 indicated that GCHQ was in the process of “targeting” more than a dozen Gemalto facilities across the globe, including in Germany, Mexico, Brazil, Canada, China, India, Italy, Russia, Sweden, Spain, Japan and Singapore.

The document also stated that GCHQ was preparing similar key theft operations against one of Gemalto’s competitors, Germany-based SIM card giant Giesecke and Devrient.

On January 17, 2014, President Barack Obama gave a major address on the NSA spying scandal. “The bottom line is that people around the world, regardless of their nationality, should know that the United States is not spying on ordinary people who don’t threaten our national security and that we take their privacy concerns into account in our policies and procedures,” he said.

The monitoring of the lawful communications of employees of major international corporations shows that such statements by Obama, other U.S. officials and British leaders — that they only intercept and monitor the communications of known or suspected criminals or terrorists — were untrue. “The NSA and GCHQ view the private communications of people who work for these companies as fair game,” says the ACLU’s Soghoian. “These people were specifically hunted and targeted by intelligence agencies, not because they did anything wrong, but because they could be used as a means to an end.”

Image/photo



THERE ARE TWO basic types of electronic or digital surveillance: passive and active. All intelligence agencies engage in extensive passive surveillance, which means they collect bulk data by intercepting communications sent over fiber optic cables, radio waves or wireless devices.

Intelligence agencies place high-powered antennas, known as “spy nests,” on top of their countries’ embassies and consulates, which are capable of vacuuming up data sent to or from mobile phones in the surrounding area. The joint NSA/CIA Special Collection Service is the lead entity that installs and mans these nests for the United States. An embassy situated near a parliament or government agency could easily intercept the phone calls and data transfers of the mobile phones used by foreign government officials. The U.S. embassy in Berlin, for instance, is located a stone’s throw from the Bundestag. But if the wireless carriers are using the stronger encryption built into modern 3G, 4G and LTE networks, then intercepted calls and other data are more difficult to crack, particularly in bulk. If an intelligence agency wants to actually listen to or read what is being transmitted, it needs to decrypt the encrypted data.

Active surveillance is another option. This would require government agencies to “jam” a 3G or 4G network, forcing nearby phones onto 2G. Once forced down to the less secure 2G technology, the phone can be tricked into connecting to a fake cell tower operated by an intelligence agency. This method of surveillance, though effective, is risky, as it leaves a digital trace that counter-surveillance experts from foreign governments could detect.

Stealing the Kis solves all of these problems. With the keys in hand, intelligence agencies can safely engage in passive, bulk surveillance without having to break the encryption and without leaving any trace whatsoever.

“Key theft enables the bulk, low-risk surveillance of encrypted communications,” the ACLU’s Soghoian says. “Agencies can collect all the communications and then look through them later. With the keys, they can decrypt whatever they want, whenever they want. It’s like a time machine, enabling the surveillance of communications that occurred before someone was even a target.”

Neither the NSA nor GCHQ would comment specifically on the key theft operations. In the past, they have argued more broadly that breaking encryption is a necessary part of tracking terrorists and other criminals. “It is longstanding policy that we do not comment on intelligence matters,” a GCHQ official stated in an email, adding that the agency’s work is conducted within a “strict legal and policy framework” that ensures its activities are “authorized, necessary and proportionate,” with proper oversight, which is the standard response the agency has provided for previous stories published by The Intercept. The agency also said, “[T]he UK’s interception regime is entirely compatible with the European Convention on Human Rights.” The NSA declined to offer any comment.

It is unlikely that GCHQ’s pronouncement about the legality of its operations will be universally embraced in Europe. “It is governments massively engaging in illegal activities,” says Sophie in’t Veld, a Dutch member of the European Parliament. “If you are not a government and you are a student doing this, you will end up in jail for 30 years.” Veld, who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept: “The secret services are just behaving like cowboys. Governments are behaving like cowboys and nobody is holding them to account.”

The Intercept’s Laura Poitras has previously reported that in 2013 Australia’s signals intelligence agency, a close partner of the NSA, stole some 1.8 million encryption keys from an Indonesian wireless carrier.

A few years ago, the FBI reportedly dismantled several transmitters set up by foreign intelligence agencies around the Washington, D.C. area, which could be used to intercept cell phone communications. Russia, China, Israel and other nations deploy technology similar to the NSA’s across the world. If those governments had the encryption keys for the SIM cards used by major U.S. cell phone carriers’ customers, such as those manufactured by Gemalto, mass snooping would be simple. “It would mean that with a few antennas placed around Washington DC, the Chinese or Russian governments could sweep up and decrypt the communications of members of Congress, U.S. agency heads, reporters, lobbyists and everyone else involved in the policymaking process and decrypt their telephone conversations,” says Soghoian.

“Put a device in front of the UN, record every bit you see going over the air. Steal some keys, you have all those conversations,” says Green, the Johns Hopkins cryptographer. And it’s not just spy agencies that would benefit from stealing encryption keys. “I can only imagine how much money you could make if you had access to the calls made around Wall Street,” he adds.

Image/photo GCHQ slide.

THE BREACH OF Gemalto’s computer network by GCHQ has far-reaching global implications. The company, which brought in $2.7 billion in revenue in 2013, is a global leader in digital security, producing banking cards, mobile payment systems, two-factor authentication devices used for online security, hardware tokens used for securing buildings and offices, electronic passports and identification cards. It provides chips to Vodafone in Europe and France’s Orange, as well as EE, a joint venture in the U.K. between France Telecom and Deutsche Telekom. Royal KPN, the largest Dutch wireless network provider, also uses Gemalto technology.

In Asia, Gemalto’s chips are used by China Unicom, Japan’s NTT and Taiwan’s Chungwa Telecom, as well as scores of wireless network providers throughout Africa and the Middle East. The company’s security technology is used by more than 3,000 financial institutions and 80 government organizations. Among its clients are Visa, Mastercard, American Express, JP Morgan Chase and Barclays. It also provides chips for use in luxury cars, including those made by Audi and BMW.

In 2012, Gemalto won a sizable contract, worth $175 million, from the U.S. government to produce the covers for electronic U.S. passports, which contain chips and antennas that can be used to better authenticate travelers. As part of its contract, Gemalto provides the personalization and software for the microchips implanted in the passports. The U.S. represents Gemalto’s single largest market, accounting for some 15 percent of its total business. This raises the question of whether GCHQ, which was able to bypass encryption on mobile networks, has the ability to access private data protected by other Gemalto products created for banks and governments.

As smart phones become smarter, they are increasingly replacing credit cards and cash as a means of paying for goods and services. When Verizon, AT&T and T-Mobile formed an alliance in 2010 to jointly build an electronic pay system to challenge Google Wallet and Apple Pay, they purchased Gemalto’s technology for their program, known as Softcard. (Until July 2014, it went by the unfortunate name of “ISIS Mobile Wallet.”) Whether data relating to that and other Gemalto security products has been compromised by GCHQ and the NSA is unclear. Both intelligence agencies declined to answer any specific questions for this story.

Image/photo Signal, iMessage, WhatsApp, Silent Phone.

PRIVACY ADVOCATES and security experts say it would take billions of dollars, significant political pressure, and several years to fix the fundamental security flaws in the current mobile phone system that NSA, GCHQ and other intelligence agencies regularly exploit.

A current gaping hole in the protection of mobile communications is that cell phones and wireless network providers do not support the use of Perfect Forward Secrecy (PFS), a form of encryption designed to limit the damage caused by theft or disclosure of encryption keys. PFS, which is now built into modern web browsers and used by sites like Google and Twitter, works by generating unique encryption keys for each communication or message, which are then discarded. Rather than using the same encryption key to protect years’ worth of data, as the permanent Kis on SIM cards can, a new key might be generated each minute, hour or day, and then promptly destroyed. Because cell phone communications do not utilize PFS, if an intelligence agency has been “passively” intercepting someone’s communications for a year and later acquires the permanent encryption key, it can go back and decrypt all of those communications. If mobile phone networks were using PFS, that would not be possible — even if the permanent keys were later stolen.
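The retroactive-decryption risk can be sketched in a few lines of Python. This is a toy stream cipher built from repeated hashing — an illustration only, not real cryptography, and every name in it is hypothetical — but it shows the contrast: with one permanent key, every recorded session opens the moment the key is stolen; with per-session keys that are destroyed after use, old recordings stay opaque.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from repeated SHA-256 hashing -- illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Without PFS: one permanent key (like a SIM card's Ki) protects every session.
ki = secrets.token_bytes(32)
recorded = [xor_crypt(ki, m) for m in (b"call one", b"call two", b"call three")]

# Steal the Ki years later, and every recorded session decrypts at once.
assert [xor_crypt(ki, c) for c in recorded] == [b"call one", b"call two", b"call three"]

# With PFS: a fresh key per session, destroyed as soon as the session ends.
session_key = secrets.token_bytes(32)
ciphertext = xor_crypt(session_key, b"call four")
del session_key  # key discarded -- the recorded ciphertext alone reveals nothing
```

The "time machine" Soghoian describes is exactly the first case: the interceptor stores ciphertext cheaply and waits for the key.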

The only effective way for individuals to protect themselves from Ki theft-enabled surveillance is to use secure communications software, rather than relying on SIM card-based security. Secure software includes email and other apps that use Transport Layer Security (TLS), the mechanism underlying the secure HTTPS web protocol. The email clients included with Android phones and iPhones support TLS, as do large email providers like Yahoo! and Google.
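As a concrete illustration of what "supports TLS" buys you, here is a minimal sketch using only Python's standard library. No network is touched; it simply shows that a default client-side TLS context of the kind mail apps rely on already enforces certificate validation, which is what protects the traffic independently of any SIM-card Ki:

```python
import ssl

# A default client context, like the ones TLS-capable email apps use.
ctx = ssl.create_default_context()

# The server must present a certificate that validates against trusted CAs...
assert ctx.verify_mode == ssl.CERT_REQUIRED

# ...and the certificate must match the hostname being connected to.
assert ctx.check_hostname
```

A client configured this way negotiates fresh session keys with the server, so a passive eavesdropper holding stolen SIM keys sees only TLS ciphertext.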

Apps like TextSecure and Silent Text are secure alternatives to SMS messages, while Signal, RedPhone and Silent Phone encrypt voice communications. Governments still may be able to intercept communications, but reading or listening to them would require hacking a specific handset, obtaining internal data from an email provider, or installing a bug in a room to record the conversations.

“We need to stop assuming that the phone companies will provide us with a secure method of making calls or exchanging text messages,” says Soghoian.

———


Additional reporting by Andrew Fishman and Ryan Gallagher. Sheelagh McNeill, Morgan Marquis-Boire, Alleen Brown, Margot Williams, Ryan Devereaux and Andrea Jones contributed to this story.

The post How Spies Stole the Keys to the Encryption Castle appeared first on The Intercept.


#Snowden #Encryption #Privacy #Spies #Spying #NSA #GCHQ #Snooping #Surveillance #Communications #Security @LibertyPod+ @Gadget Guru+

Seth Martin
  last edited: Mon, 13 Oct 2014 13:14:18 -0500  
Why privacy matters

Image/photo



Glenn Greenwald was one of the first reporters to see -- and write about -- the Edward Snowden files, with their revelations about the United States' extensive surveillance of private citizens. In this searing talk, Greenwald makes the case for why you need to care about privacy, even if you’re “not doing anything you need to hide."


#Greenwald #Snowden #Privacy #NSA #Surveillance #Spying #Freedom #Security @LibertyPod+
Klaus
  
brilliant!

Seth Martin
  
Glyn Moody wrote the following post Thu, 03 Jul 2014 09:42:00 -0500

We learnt about the NSA's XKeyscore program a year ago, and about its incredibly wide reach. But now the German TV stations NDR and WDR claim to have excerpts from its source code. We already knew that the NSA and GCHQ have been targeting Tor and its users, but the latest leak reveals some details about which Tor exit nodes were selected for surveillance -- including at least one in Germany, which is likely to increase public anger there. It also shows that Tor users are explicitly regarded as "extremists" (original in German, pointed out to us by @liese_mueller):
The source code contains both technical instructions and comments from the developers that provide an insight into the mind of the NSA. Thus, all users of such programs are equated with "extremists".

Such is the concern about Tor that even visitors to Tor sites -- whether or not they use the program -- have their details recorded:
It is not only long-term users of this encryption software who become targets for the [US] secret service. Anyone who wants to visit the official Tor Web site simply for information is flagged.

The source code also gives the lie to the oft-repeated claim that only metadata, not content, is gathered:
With the source code, it can be proven beyond reasonable doubt for the first time that the NSA is reading not only so-called metadata, that is, connection data. If emails are sent using the Tor network, the programming code shows that the contents -- the so-called email body -- are evaluated and stored.

As well as all this interesting information, what's important here is that it suggests the source of this leak -- presumably Edward Snowden, although the German news report does not name him -- copied not just NSA documents, but source code too. As in the present case, that is likely to provide a level of detail that goes well beyond descriptive texts.

Source


#NSA #XKeyscore #Tor #Surveillance #Privacy #Spying #Snooping @LibertyPod+
Seth Martin
  
Lucky you don't live in the US where your life can easily be destroyed.
Klaus
  
Big topic in Germany today. It even made it to the main evening news, especially featuring one student who was running a Tor node and could be identified by an IP address in the source code. But according to a survey, still only 21% of people here believe that they are individually affected by these agencies' spying.

Seth Martin
  last edited: Fri, 27 Jun 2014 13:13:23 -0500  
Protestors Launch a 135-Foot Blimp Over the NSA’s Utah Data Center | Threat Level | WIRED

Image/photo

Activist groups including the Electronic Frontier Foundation and Greenpeace launched the 135-foot thermal airship early Friday morning to protest the NSA's mass surveillance programs and to announce the launch of Stand Against Spying.


(Edit) Here's a better article about it:
https://www.eff.org/press/releases/diverse-groups-fly-airship-over-nsas-utah-data-center-protest-illegal-internet-spying

#Privacy #NSA #Surveillance #Spying #Protest @LibertyPod+
Thomas Willingham
  last edited: Fri, 27 Jun 2014 12:59:00 -0500  
These things are worse than useless.

Throwing money at a blimp not only isn't useful, but makes them feel useful, so they don't have to do anything.

Give that money to somebody actually doing something, and actually do something yourself, and you'll, well, do something about it.
Seth Martin
  last edited: Sun, 25 May 2014 11:54:50 -0500  
Yet another reason to completely switch to open-source, decentralized and distributed communications and content management methods such as the red#, Friendica and XMPP/Jabber.

#^FBI: We need wiretap-ready Web sites - now - CNET

Image/photo

CNET learns the FBI is quietly pushing its plan to force surveillance backdoors on social networks, VoIP, and Web e-mail providers, and that the bureau is asking Internet companies not to oppose a law making those backdoors mandatory.


#CALEA #Wiretapping #Social Networking #Communications #Privacy #FCC #FBI #Surveillance #Security #Backdoors #Snooping #RedMatrix #Friendica #XMPP @LibertyPod+
Seth Martin
  last edited: Mon, 05 Oct 2015 14:27:11 -0500  
Marshall SutherlandMarshall Sutherland wrote the following post Sat, 03 May 2014 15:49:43 -0500
The Strangest Interview Yet With the Outgoing Head of the NSA

On network television, broadcasters tend to be very deferential when interviewing U.S. officials, especially ones wearing military dress. In contrast, comedians who appear on fake news programs affect an adversarial, intentionally disrespectful persona for laughs. And sometimes, as in John Oliver's interview with outgoing NSA head Keith Alexander, the result is a U.S. official getting called on his slipperiness in a way that would never happen on more "serious" programs.




General Keith Alexander Extended Interview: Last Week Tonight With John Oliver (HBO)
by LastWeekTonight on YouTube


#NSA #Surveillance #Privacy #Snowden #Liberty #Spying #Freedom #Comedy #Humor #Humour