- cross-posted to:
- technology@lemmy.world
Privacy stalwart Nicholas Merrill spent a decade fighting an FBI surveillance order. Now he wants to sell you phone service—without knowing almost anything about you.
Nicholas Merrill has spent his career fighting government surveillance. But he would really rather you didn’t call what he’s selling now a “burner phone.”
Yes, he dreams of a future where anyone in the US can get a working smartphone—complete with cellular coverage and data—without revealing their identity, even to the phone company. But to call such anonymous phones “burners” suggests that they’re for something illegal, shady, or at least subversive. The term calls to mind drug dealers or deep-throat confidential sources in parking garages.
With his new startup, Merrill says he instead wants to offer cellular service for your existing phone that makes near-total mobile privacy the permanent, boring default of daily life in the US. “We’re not looking to cater to people doing bad things,” says Merrill. “We’re trying to help people feel more comfortable living their normal lives, where they’re not doing anything wrong, and not feel watched and exploited by giant surveillance and data mining operations. I think it’s not controversial to say the vast majority of people want that.”
That’s the thinking behind Phreeli, the phone carrier startup Merrill launched today, designed to be the most privacy-focused cellular provider available to Americans. Phreeli, as in, “speak freely,” aims to give its users a different sort of privacy from the kind that can be had with end-to-end encrypted texting and calling tools like Signal or WhatsApp. Those apps hide the content of conversations, or even, in Signal’s case, metadata like the identities of who is talking to whom. Phreeli instead wants to offer actual anonymity. It can’t help government agencies or data brokers obtain users’ identifying information because it has almost none to share. The only piece of information the company records about its users when they sign up for a Phreeli phone number is, in fact, a mere ZIP code. That’s the minimum personal data Merrill has determined his company is legally required to keep about its customers for tax purposes.
By asking users for almost no identifiable information, Merrill wants to protect them from one of the most intractable privacy problems in modern technology: Despite whatever surveillance-resistant communications apps you might use, phone carriers will always know which of their customers’ phones are connecting to which cell towers and when. Carriers have frequently handed that information over to data brokers willing to pay for it—or any FBI or ICE agent who demands it with a court order.
Merrill has some firsthand experience with those demands. Starting in 2004, he fought a landmark, decade-plus legal battle against the FBI and the Department of Justice. As the owner of an internet service provider in the post-9/11 era, Merrill had received a secret order from the bureau to hand over data on a particular user—and he refused. After that, he spent another 15 years building and managing the Calyx Institute, a nonprofit that offers privacy tools like a snooping-resistant version of Android and a free VPN that collects no logs of its users’ activities. “Nick is somebody who is extremely principled and willing to take a stand for his principles,” says Cindy Cohn, who as executive director of the Electronic Frontier Foundation has led the group’s own decades-long fight against government surveillance. “He’s careful and thoughtful, but also, at a certain level, kind of fearless.”
Nicholas Merrill with a copy of the National Security Letter he received from the FBI in 2004, ordering him to give up data on one of his customers. He refused, fought a decade-plus court battle—and won.
More recently, Merrill began to realize he had a chance to achieve a win against surveillance at a more fundamental level: by becoming the phone company. “I started to realize that if I controlled the mobile provider, there would be even more opportunities to create privacy for people,” Merrill says. “If we were able to set up our own network of cell towers globally, we can set the privacy policies of what those towers see and collect.”
Building or buying cell towers across the US for billions of dollars, of course, was not within the budget of Merrill’s dozen-person startup. So he’s created the next best thing: a so-called mobile virtual network operator, or MVNO, a kind of virtual phone carrier that pays one of the big, established ones—in Phreeli’s case, T-Mobile—to use its infrastructure.
The result is something like a cellular prophylactic. The towers are T-Mobile’s, but the contracts with users—and the decisions about what private data to require from them—are Phreeli’s. “You can’t control the towers. But what can you do?” he says. “You can separate the personally identifiable information of a person from their activities on the phone system.”
Signing up a customer for phone service without knowing their name is, surprisingly, legal in all 50 states, Merrill says. Anonymously accepting money from users—with payment options other than envelopes of cash—presents more technical challenges. To that end, Phreeli has implemented a new encryption system it calls Double-Blind Armadillo, based on cutting-edge cryptographic protocols known as zero-knowledge proofs. Through a kind of mathematical sleight of hand, those crypto functions are capable of tasks like confirming that a certain phone has had its monthly service paid for, but without keeping any record that links a specific credit card number to that phone. Phreeli users can also pay their bills (or rather, prepay them, since Phreeli has no way to track down anonymous users who owe them money) with tough-to-trace cryptocurrency like Zcash or Monero.
Phreeli users can, however, choose to set their own dials for secrecy versus convenience. If they offer an email address at signup, they can more easily recover their account if their phone is lost. To get a SIM card, they can give their mailing address—which Merrill says Phreeli will promptly delete after the SIM ships—or they can download the digital equivalent known as an eSIM, even, if they choose, from a site Phreeli will host on the Tor anonymity network.
Phreeli’s “armadillo” analogy—the animal also serves as the mascot in its logo—is meant to capture this sliding scale of privacy that Phreeli offers its users: Armadillos always have a layer of armor, but they can choose whether to expose their vulnerable underbelly or curl into a fully protected ball.
Even if users choose the less paranoid side of that spectrum of options, Merrill argues, his company will still be significantly less surveillance-friendly than existing phone companies, which have long represented one of the weakest links in the tech world’s privacy protections. All major US cellular carriers comply, for instance, with law enforcement surveillance orders like “tower dumps” that hand over data to the government on every phone that connected to a particular cell tower during a certain time. They’ve also happily, repeatedly handed over your data to corporate interests: Last year the Federal Communications Commission fined AT&T, Verizon, and T-Mobile nearly $200 million for selling users’ personal information, including their locations, to data brokers. (AT&T’s fine was later overturned by an appeals court ruling intended to limit the FCC’s enforcement powers.) Many data brokers in turn sell the information to federal agencies, including ICE and other parts of the DHS, offering an all-too-easy end run around restrictions on those agencies’ domestic spying.
Phreeli doesn’t promise to be a surveillance panacea. Even if your cellular carrier isn’t tying your movements to your identity, the operating system of whatever phone you sign up with might be. Even your mobile apps can track you.
But for a startup seeking to be the country’s most privacy-focused mobile carrier, the bar is low. “The goal of this phone company I’m starting is to be more private than the three biggest phone carriers in the US. That’s the promise we’re going to massively overdeliver on,” says Merrill. “I don’t think there’s any way we can mess that up.”
Merrill’s not-entirely-voluntary decision to spend the last 20-plus years as a privacy diehard began with three pages of paper that arrived at his office on a February day in New York in 2004. An FBI agent knocked on the door of his small internet service provider firm called Calyx, headquartered in a warehouse space a block from the Holland Tunnel in Manhattan. When Merrill answered, he found an older man with parted white hair, dressed in a trench coat like a comic book G-man, who handed him an envelope.
Merrill opened it and read the letter while the agent waited. The first and second paragraphs told him he was hereby ordered to hand over virtually all information he possessed for one of his customers, identified by their email address, explaining that this demand was authorized by a law he’d later learn was part of the Patriot Act. The third paragraph informed him he couldn’t tell anyone he’d even received this letter—a gag order.
Then the agent departed without answering any of Merrill’s questions. He was left to decide what to do, entirely alone.
Merrill was struck immediately by the fact that the letter had no signature from a judge. He had in fact been handed a so-called National Security Letter, or NSL, a rarely seen and highly controversial tool of the Bush administration that allowed the FBI to demand information without a warrant, so long as it was related to “national security.”
Calyx’s actual business, since he’d first launched the company in the early ’90s with a bank of modems in the nonfunctional fireplace of a New York apartment, had evolved into hosting the websites of big corporate customers like Mitsubishi and Ikea. But Merrill used that revenue stream to give pro bono or subsidized web hosting to nonprofit clients he supported like the Marijuana Policy Project and Indymedia—and to offer fast internet connections to a few friends and acquaintances like the one named in this surveillance order.
Merrill has never publicly revealed the identity of the NSL’s target, and he declined to share it with WIRED. But he knew this particular customer, and he certainly didn’t strike Merrill as a national security threat. If he were, Merrill thought, why not just get a warrant? The customer would later tell Merrill he had in fact been pressured by the FBI to become an informant—and had refused. The bureau, he told Merrill, had then retaliated by putting him on the no-fly list and pressuring employers not to hire him. (The FBI didn’t respond to WIRED’s request for comment on the case.)
Merrill immediately decided to risk disobeying the gag order—on pain of what consequences, he had no idea—and called his lawyer, who told him to go to the New York affiliate of the American Civil Liberties Union, which happened to be one of Calyx’s web-hosting clients. After a few minutes in a cab, Merrill was talking to a young attorney named Jameel Jaffer in the ACLU’s Financial District office. “I wish I could say that we reassured him with our expertise on the NSL statute, but that’s not how it went down,” Jaffer says. “We had never seen one of these before.”
Merrill, meanwhile, knew that every lawyer he showed the letter to might represent another count in his impending prosecution. “I was terrified,” he says. “I kind of assumed someone could just come to my place that night, throw a hood over my head, and drag me away.”
Phreeli will use a novel encryption system called Double-Blind Armadillo—based on cutting-edge crypto protocols known as zero-knowledge proofs—to pull off tricks like accepting credit card payments from customers without keeping any record that ties that payment information to their particular phone.
Despite his fears, Merrill never complied with the FBI’s letter. Instead, he decided to fight its constitutionality in court, with the help of pro bono representation from the ACLU and later the Yale Media Freedom and Information Access Clinic. That fight would last 11 years and entirely commandeer his life.
Merrill and his lawyers argued that the NSL represented an unconstitutional search and a violation of his free-speech rights—and they won. But Congress only amended the NSL statute, leaving the provision about its gag order intact, and the legal battle dragged out for years longer. Even after the NSL was rescinded altogether, Merrill continued to fight for the right to talk about its existence. “This was a time when so many people in his position were essentially cowering under their desks. But he felt an obligation as a citizen to speak out about surveillance powers that he thought had gone too far,” says Jaffer, who represented Merrill for the first six years of that courtroom war. “He impressed me with his courage.”
Battling the FBI took over Merrill’s life to the degree that he eventually shut down his ISP for lack of time or will to run the business and instead took a series of IT jobs. “I felt too much weight on my shoulders,” he says. “I was just constantly on the phone with lawyers, and I was scared all the time.”
By 2010, Merrill had won the right to publicly name himself as the NSL’s recipient. By 2015 he’d beaten the gag order entirely and released the full letter with only the target’s name redacted. But Merrill and the ACLU never got the Supreme Court precedent they wanted from the case. Instead, the Patriot Act itself was amended to rein in NSLs’ unconstitutional powers.
In the meantime, those years of endless bureaucratic legal struggles had left Merrill disillusioned with judicial or even legislative action as a way to protect privacy. Instead, he decided to try a different approach. “The third way to fight surveillance is with technology,” he says. “That was my big realization.”
So, just after Merrill won the legal right to go public with his NSL battle in 2010, he founded the Calyx Institute, a nonprofit that shared a name with his old ISP but was instead focused on building free privacy tools and services. The privacy-focused version of Google’s Android OS it would develop, designed to strip out data-tracking tools and use Signal by default for calls and texts, would eventually have close to 100,000 users. It ran servers for anonymous, encrypted instant messaging over the chat protocol XMPP with around 300,000 users. The institute also offered a VPN service and ran servers that comprised part of the volunteer-based Tor anonymity network, tools that Merrill estimates were used by millions.
As he became a cause célèbre and then a standout activist in the digital privacy world over those years, Merrill says he started to become aware of the growing problem of untrustworthy cellular providers in an increasingly phone-dependent world. He’d sometimes come across anti-surveillance hard-liners determined to avoid giving any personal information to cellular carriers, who bought SIM cards with cash and signed up for prepaid plans with false names. Some even avoided cell service altogether, using phones they connected only to Wi-Fi. “Eventually those people never got invites to any parties,” Merrill says.
All these schemes, he knew, were legal enough. So why not a phone company that only collects minimal personal information—or none—from its normal, non-extremist customers? As early as 2019, he had already consulted with lawyers and incorporated Phreeli as a company. He decided on the for-profit startup route after learning that the 501(c)(3) statute can’t apply to a telecom firm. Only last year, he finally raised $5 million, mostly from one angel investor. (Merrill declined to name the person. Naturally, they value their privacy.)
Building a system that could function like a normal phone company—and accept users’ payments like one—without storing virtually any identifying information on those customers presented a distinct challenge. To solve it, Merrill consulted with Zooko Wilcox, one of the creators of Zcash, perhaps the closest thing in the world to actual anonymous cryptocurrency. The Z in Zcash stands for “zero-knowledge proofs,” a relatively new form of crypto system that has allowed Zcash’s users to prove things (like who has paid whom) while keeping all information (like their identities, or even the amount of payments) fully encrypted.
For Phreeli, Wilcox suggested a related but slightly different system: so-called “zero-knowledge access passes.” Wilcox compares the system to people showing their driver’s license at the door of a club. “You’ve got to give your home address to the bouncer,” Wilcox says incredulously. The magical properties of zero knowledge proofs, he says, would allow you to generate an unforgeable crypto credential that proves you’re over 21 and then show that to the doorman without revealing your name, address, or even your age. “A process that previously required identification gets replaced by something that only requires authorization,” Wilcox says. “See the difference?”
The same trick will now let Phreeli users prove they’ve prepaid their phone bill without connecting their name, address, or any payment information to their phone records—even if they pay with a credit card. The result, Merrill says, will be a user experience for most customers that’s not very different from their existing phone carrier, but with a radically different level of data collection.
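The article doesn’t spell out Phreeli’s protocol, but the “access pass” idea Wilcox describes resembles blinded-token schemes such as Privacy Pass. Here’s a minimal toy sketch in Python using an RSA blind signature—illustrative only, with demo-sized numbers, and not a description of Phreeli’s actual Double-Blind Armadillo design:

```python
# Toy RSA blind-signature "access pass": the carrier signs a token it
# never sees, so presenting the token later proves payment without
# linking it to the payment. Demo-sized numbers; real systems use
# 2048+ bit keys and proper padding.
import secrets
from math import gcd

p, q = 1009, 1013
n = p * q                           # carrier's public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # carrier's private exponent

def random_unit(n):
    """Random value in [2, n) coprime to n."""
    while True:
        x = secrets.randbelow(n - 2) + 2
        if gcd(x, n) == 1:
            return x

# --- At payment time ---
token = random_unit(n)              # customer's secret token
r = random_unit(n)                  # blinding factor
blinded = (token * pow(r, e, n)) % n    # the carrier only sees this
blind_sig = pow(blinded, d, n)          # carrier signs it (tied to payment)

# Customer strips the blinding; the carrier never saw `token` itself.
sig = (blind_sig * pow(r, -1, n)) % n

# --- Later, at service time ---
# (token, sig) proves "someone paid" without revealing which payment:
assert pow(sig, e, n) == token
print("access pass verified, unlinkable to the payment")
```

The key property: `blind_sig` equals `token^d * r` mod n, so dividing out `r` leaves a valid signature on a token the signer never observed.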
As for Wilcox, he’s long been one of that small group of privacy zealots who buys his SIM cards in cash with a fake name. But he hopes Phreeli will offer an easier path—not just for people like him, but for normies too.
“I don’t know of anybody who’s ever offered this credibly before,” says Wilcox. “Not the usual telecom-strip-mining-your-data phone, not a black-hoodie hacker phone, but a privacy-is-normal phone.”
Even so, enough tech companies have pitched privacy as a feature for their commercial product that jaded consumers may not buy into a for-profit telecom like Phreeli purporting to offer anonymity. But the EFF’s Cohn says that Merrill’s track record shows he’s not just using the fight against surveillance as a marketing gimmick to sell something. “Having watched Nick for a long time, it’s all a means to an end for him,” she says. “And the end is privacy for everyone.”
Merrill may not like the implications of describing Phreeli as a cellular carrier where every phone is a burner phone. But there’s little doubt that some of the company’s customers will use its privacy protections for crime—just as with every surveillance-resistant tool, from Signal to Tor to briefcases of cash.
Phreeli won’t, at least, offer a platform for spammers and robocallers, Merrill says. Even without knowing users’ identities, he says the company will block that kind of bad behavior by limiting how many calls and texts users are allowed, and banning users who appear to be gaming the system. “If people think this is going to be a safe haven for abusing the phone network, that’s not going to work,” Merrill says.
But some customers of his phone company will, to Merrill’s regret, do bad things, he says—just as they sometimes used to with pay phones, that anonymous, cash-based phone service that once existed on every block of American cities. “You put a quarter in, you didn’t need to identify yourself, and you could call whoever you wanted,” he reminisces. “And 99.9 percent of the time, people weren’t doing bad stuff.” The small minority who were, he argues, didn’t justify the involuntary societal slide into the cellular panopticon we all live in today, where a phone call not tied to freely traded data on the caller’s identity is a rare phenomenon.
“The pendulum has swung so far in favor of total information awareness,” says Merrill, using an intelligence term of the Bush administration whose surveillance order set him on this path 21 years ago. “Things that we used to be able to take for granted have slipped through our fingers.”
“Other phone companies are selling an apartment that comes with no curtains—where the windows are incompatible with curtains,” Merrill says. “We’re trying to say, no, curtains are normal. Privacy is normal.”
Wait, they ask for your details when setting up a phone in America?
I thought y’all lived in the land of the free!
The most I’ve ever been asked for to set up a phone is my bank details, and that’s it, so they can set up direct debit for my contract
Ironically, that is the last thing I would want to give to a phone company
The ability to pay money for your contract?
Edit: they only ask for that if you’re on contract; if pay-as-you-go, they ask for no details at all
I like the ability to pay, I just don’t want to allow them access or even knowledge of my bank account.
He’d sometimes come across anti-surveillance hard-liners determined to avoid giving any personal information to cellular carriers, who bought SIM cards with cash and signed up for prepaid plans with false names. Some even avoided cell service altogether, using phones they connected only to Wi-Fi.
So if this is already possible, what is his new company providing that’s new?
What’s the problem he’s trying to solve?
It appears as though Cloaked Wireless might be a better deal.
Phreeli offers $25/mo for unlimited talk and text with zero gigs of data per month. They give you a free 2GB at signup, but once you are done with that, you have to pay $20 for 5GB. That $25 does include government extortion and fees.
Cloaked Wireless also offers a $25 per month plan, but does not include extortion and fees in the price, so it would be more like $32. They give you unlimited talk and text with 500 megabytes of high-speed data and unlimited low-speed data after that.
You can pay both of them with Monero, which is why I’m definitely going to switch, but so far I think I’m going to go with Cloaked Wireless instead, because they offer a lot of the same guarantees for a lower price (after data is added to Phreeli).
A few things. If you sign up, don’t then go use the number with things that associate it to your real identity like a bank account or credit card. Also, if you’ve already used your phone with a provider that has your real name, then it’s compromised because you could be linked by the IMEI. Get a fresh phone that you’ve never linked to your identity before. Also, don’t transfer your number to this service. Get a new number provided by them. Additionally, pay with cryptocurrency.
This is all if you want to stay truly anonymous with no traces back to you.
Nick Merrill! This guy is awesome! I met him a few times back around 2014 when I sold him a bunch of old Dell server racks, presumably for use by his organization Calyx. This was a few years after his case against the FBI ended and he was able to talk freely about it. I’d been following the case previously so it was like meeting a personal hero, even though we were just manually humping Dell pizza boxes into his van. Legit guy, really cares.
Much respect to Nick for fighting for eleven years against the gag order he received, but I’m disappointed that he is now selling this service with cryptography-theater privacy features.
I am incredibly interested in this. I was considering a switch to a provider called Cloaked Wireless, so I’m going to have to do some research to see what the differences are.
TL;DR: Nicholas Merrill, a well-known privacy activist, launched Phreeli, a phone service that lets you use mobile data and calls without giving your identity. It runs on T-Mobile’s network but only keeps a ZIP code, and uses zero-knowledge crypto so even payments are not linked to you. Merrill spent 10 years fighting the FBI over surveillance and now wants to make privacy simple and normal for everyone.
Can someone with experience doing ZK Proofs please poke holes in this design?
Can someone with experience doing ZK Proofs please poke holes in this design?
One doesn’t need to know about zero-knowledge proofs to poke holes in this design.
Just read their whitepaper:
You can read the whole thing here but I’ll quote the important part: (emphasis mine)
Double-Blind Armadillo (aka Double Privacy Pass with Commitments) is a privacy-focused system architecture and cryptographic protocol designed around the principle that no single party should be able to link an individual’s real identity, payments, and phone records. Customers should be able to access services, manage payments, and make calls without having their activity tracked across systems. The system achieves this by partitioning critical information related to customer identities, payments, and phone usage into separate service components that communicate only through carefully controlled channels. Each component knows only the information necessary to perform its function and nothing more. For example, the payment service never learns which phone number belongs to a person, and the phone service never learns their name.
Note that parties (as in “no single party”) here are synonymous with service components.
So, if we assume that all of the cryptography does what it says it does, how would an attacker break this system?
By compromising (or simply controlling in the first place) more than one service component.
I don’t see any claim that any of the service components are actually run by independent entities. And, even if they were supposedly run by different people, for the privacy of this system to stop being dependent on a single company behind it doing what they say they’re doing, there would also need to be some cryptographic mechanism for customers to verify that the independent entities supposedly operating different parts were in fact doing so.
In conclusion, yes, this is mostly cryptography-washing. Assuming good intentions (e.g., not being compromised from the start), the cryptographic system here would make it slightly more work for them to become compromised, but it does not really prevent anything.
The primary thing accomplished by cryptography here over just having a simple understandable “we don’t record the link between payment info and phone numbers, but you’ll just have to trust us on that” policy is to give potential customers a (false) sense of security.
If they use a payment processor, doesn’t that become the second service component?
If a payment processor implemented this (or some other anonymous payment protocol), and customers paid them on their website instead of on the website of the company selling the phone number, yeah, it could make sense.
But that is not what is happening here: I clicked through on Phreeli’s website and they’re loading Stripe js on their own site for credit cards and evidently using their own self-hosted thing for accepting a hilariously large number of cryptocurrencies (though all of the handful of common ones I tried yielded various errors rather than a payment address).
I used Calyx Institute for internet for a couple years while working online and living in a car. Solid company. Definitely gonna check this out.
So he’s created the next best thing: a so-called mobile virtual network operator, or MVNO, a kind of virtual phone carrier that pays one of the big, established ones—in Phreeli’s case, T-Mobile—to use its infrastructure.
The result is something like a cellular prophylactic. The towers are T-Mobile’s…
So T-Mobile sees all of your DNS queries, the numbers of everyone you call, and can read all your SMS messages. And fingerprint your voice. And triangulate your position.
So unless you avoid all of this by using DoH, never making phone calls (and making sure no friends, family, employer, or bank calls you), never turning the phone on at your home address, and never using SMS, they’ll be able to identify the owner of your plan within the first week of typical usage data collection
You can configure some phones to encrypt all SMS messages.
It’s a bit like PGP email, though, in that despite it working, no one seems to use it
Don’t spread misinformation. SMS cannot have e2ee.
Sure you can wrap it with some other layer of encryption. But, as you say, that doesn’t work because the recipient can’t decrypt it.
It’s not misinformation. SMS can have end-to-end encryption if the messages exchanged between two people in a conversation are encrypted.
It’s an add-on, much the way PGP encryption works for email: the first handshake is unencrypted and includes each participant’s public key; after that you can have it automatically encrypt each message
SMS can have end to end encryption
In theory it can, but in practice I’m not aware of any software anyone uses today which does that. (Are you? Which?)
TextSecure, the predecessor to Signal, did actually originally use SMS to transport OTR-encrypted messages, but it stopped doing that and switched to requiring a data connection and using Amazon Web Services as an intermediary long ago (before it was merged with their calling app RedPhone and renamed to Signal).
Edit: I forgot, there was also an SMS-encrypting fork of TextSecure called SMSSecure, later renamed Silence. It hasn’t been updated in 5 (on GitHub) or 6 (on F-Droid) years, but maybe it still works? 🤷
I was thinking of RCS security apparently, but was mainly talking about what’s theoretically possible.
There’s nothing stopping someone from creating an E2E-encrypted SMS app. The medium doesn’t matter, only the data. You could have end-to-end encrypted carrier pigeons if you want.
Okay, but like, if the carrier sees all your texts, don’t they also receive the public keys and can then also decrypt the messages?? I’m genuinely curious how this works. The more I think about it, the more I am convinced I don’t understand how any encryption works because the intended recipient needs the key to decrypt it, and if I’m giving them that key, but my traffic is also being watched… doesn’t whoever wants to snoop get the key too??
I feel like I have to be missing something because this just sounds like having an encrypted flash drive that you leave out in the open for someone else to grab, but it has the password written on the side of it in sharpie.
don’t they also receive the public keys and can then also decrypt the messages??
A public key is used to encrypt a message, you need the private key to decrypt.
That’s why you have public key servers. It doesn’t matter who has the public key; all they can do with it is encrypt information that only the private key holder can decrypt.
The more I think about it, the more I am convinced I don’t understand how any encryption works because the intended recipient needs the key to decrypt it
The way it was explained to me that finally made it click was this:
Imagine you have a lockable box (public key) and a key (private key). The box is empty, so you give it to your friend. It doesn’t matter if anyone sees the open box because there’s nothing in it. Your friend puts something private for you in the box and locks it. People see the box as he’s bringing it to you, but they can’t see what’s in the box, because neither he nor the people watching have the key to the box; only you do. Once it gets to you, you can open the box with your key.
OH. So like, it’s a situation where the “lock” has 2 keys, one that locks it and one that unlocks it. You keep the “unlock” key on your person and never let it out of your sight, but let the “lock” key just get distributed and copied anywhere, because all it can do is LOCK the door, and it really doesn’t matter who locks the door so long as only you can unlock it.
That is very interesting. I still don’t quite understand how it technically works, because I thought if you encrypt something with a key, you could basically “do it backwards” to get the original information… This is probably due to getting simplified explanations of encryption, though, that make it analogous to a basic cipher (take every letter, assign it to a number, add 10, convert back to a new letter - can’t be read unless someone knows or figures out the “key” is 10), and now it is obvious that it is significantly more complex than that…
But I am much more confident that I understand the ‘mechanics’ of it, so thank you for the explanation!
Yep, you lock with the public key and unlock with the private.
You can’t unlock with the public, it’s one way only
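The one-way, two-key idea can be seen concretely in a toy RSA example. This is only a sketch using a well-known small-number textbook example (real RSA uses enormous primes plus padding schemes, and you would never implement it by hand):

```python
# Toy RSA with tiny numbers: "lock" with the public key, "unlock" with the
# private key. NOT secure - real RSA uses huge primes and padding; this only
# illustrates the two-key mechanics.

p, q = 61, 53            # two secret primes
n = p * q                # 3233, part of both keys
phi = (p - 1) * (q - 1)  # 3120, needed to derive the private exponent
e = 17                   # public exponent -> public key is (n, e)
d = pow(e, -1, phi)      # 2753, private exponent -> private key is (n, d)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can "lock" with the public key
recovered = pow(ciphertext, d, n)  # only the private-key holder can "unlock"

print(ciphertext, recovered)  # 2790 65
```

Knowing `(n, e)` alone doesn’t let you reverse the encryption; you’d have to factor `n` to find `d`, which is easy for 3233 but infeasible for the key sizes used in practice. (The three-argument `pow(e, -1, phi)` modular-inverse form needs Python 3.8+.)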
So like, it’s a situation where the “lock” has 2 keys, one that locks it and one that unlocks it
Precisely :) This is called asymmetric encryption, see https://en.wikipedia.org/wiki/Public-key_cryptography to learn more, or read on for a simple example.
I thought if you encrypt something with a key, you could basically “do it backwards” to get the original information
That is how it works in symmetric encryption.
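The “add 10 to every letter” cipher described above makes a good minimal example of symmetric encryption, where decrypting really is just doing the same operation backwards with the same key:

```python
# A shift cipher: symmetric, because the same key both encrypts and decrypts -
# decryption is literally the encryption run in reverse.

def shift(text: str, key: int) -> str:
    """Shift each lowercase letter by `key` positions, wrapping around z->a."""
    return "".join(chr((ord(c) - ord("a") + key) % 26 + ord("a")) for c in text)

ciphertext = shift("hello", 10)     # encrypt: add 10 to every letter
plaintext = shift(ciphertext, -10)  # decrypt: subtract 10 (same key, reversed)
print(ciphertext, plaintext)  # rovvy hello
```

Anyone who learns the key can decrypt, which is exactly the problem asymmetric schemes solve: with a public/private key pair there is no single shared secret to leak.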
In many real-world applications, a combination of the two is used: asymmetric encryption is used to encrypt - or to agree upon - a symmetric key which is used for encrypting the actual data.
Here is a simplified version of the Diffie–Hellman key exchange (an asymmetric key-agreement protocol that can be used to agree on a symmetric key while communicating over a non-confidential medium), using small numbers to help you wrap your head around the relationship between public and private keys. The only math you need to be able to reproduce this example on paper is exponentiation (which is just repeated multiplication).
Here is the setup:
- There is a base number which everyone uses (it’s part of the protocol); we’ll call it `g` and say it’s 2
- Alice picks a secret key `a`, which we’ll say is 3. Alice’s public key `A` is g^a (2^3, or `2*2*2`), which is 8
- Bob picks a secret key `b`, which we’ll say is 4. Bob’s public key `B` is g^b (2^4, or `2*2*2*2`), which is 16
- Alice and Bob publish their public keys.
Now, using the other’s public key and their own private key, both Alice and Bob can arrive at a shared secret by using the fact that B^a is equal to A^b (because (g^a)^b is equal to g^(ab), which, due to multiplication being commutative, is also equal to g^(ba)).
So:
- Alice raises Bob’s public key to the power of her private key (16^3, or `16*16*16`) and gets 4096
- Bob raises Alice’s public key to the power of his private key (8^4, or `8*8*8*8`) and gets 4096
The result, which the two parties arrived at via different calculations, is the “shared secret” which can be used as a symmetric key to encrypt messages using some symmetric encryption system.
You can try this with other values for `g`, `a`, and `b` and confirm that Alice and Bob will always arrive at the same shared secret.

Going from the above example to actually-useful cryptography requires a bit of less-simple math, but in summary:
- To break this system and learn the shared secret, an adversary would want to learn the private key of one of the parties. To do this, they can simply undo the exponentiation: find the logarithm. With these small numbers this is not difficult at all: knowing the base (2) and Alice’s public key (8), it is easy to compute the base-2 log of 8 and learn that `a` is 3. The difficulty of computing the logarithm is the difficulty of breaking this system.
- It turns out you can do arithmetic in a cyclic group (a concept which actually everyone has encountered from the way that we keep time - you’re performing `mod 12` when you add 2 hours to 11pm and get 1am). A logarithm in a cyclic group is called a discrete logarithm, and finding it is a computationally hard problem. This means that (when using sufficiently large numbers for the keys and the size of the cyclic group) this system can actually be secure. (However, it will break if/when someone builds a big enough quantum computer to run Shor’s algorithm…)
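The whole walkthrough fits in a few lines of code. Below, the first part reproduces the exact toy numbers from the example above, and the second part does the same exchange modulo a small prime (p = 23 and g = 5 are classic textbook values, not a real parameter choice - in practice the numbers are thousands of bits long):

```python
# Diffie-Hellman, first with plain exponentiation (the example above), then
# in a cyclic group mod a prime, which is the form actually used in practice.

g = 2          # public base, part of the protocol
a, b = 3, 4    # Alice's and Bob's private keys

A = g ** a     # Alice's public key: 8
B = g ** b     # Bob's public key: 16

assert B ** a == A ** b == 4096   # both sides reach the same shared secret

# Modular version: toy textbook parameters, NOT secure sizes.
p, g = 23, 5
a, b = 4, 3
A = pow(g, a, p)             # Alice publishes 4
B = pow(g, b, p)             # Bob publishes 10
shared_alice = pow(B, a, p)  # Alice computes B^a mod p
shared_bob = pow(A, b, p)    # Bob computes A^b mod p
print(shared_alice, shared_bob)  # 18 18
```

An eavesdropper who sees `p`, `g`, `A`, and `B` would have to solve the discrete logarithm to recover `a` or `b` - trivial here, infeasible at real key sizes. Python’s three-argument `pow(base, exp, mod)` does the modular exponentiation efficiently without ever building the huge intermediate number.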
public keys
I’m not too sure how cryptography works, but I’m pretty sure it’s fine if other people have your public key. I’m reasonably sure it’s actually required in a system with public and private keys.
I think they may have meant it like that: email does not support PGP out of the box; it is just the medium the data is using. In the same way, you can send data via SMS that is encrypted when it leaves one device and decrypted when it reaches its destination (unless the recipient doesn’t have a way to decrypt it, which I think is both of your points).
Merrill says. “If we were able to set up our own network of cell towers globally, we can set the privacy policies of what those towers see and collect.”
Well that’s ambitious
can I use this for safely tormenting linux ISOs?
I make sure to lock my ISOs in a dungeon every night for maximum tormenting speed.
I find the best way to torment Linux ISOs is using the live environment exclusively to:
1. Bing search “how to burn iso to usb”
   1a) Immediately change search to “how to burn iso to usb linux”
2. Bing search “windows 11 install free iso”
   2a) Download the first link that is an iso
3. Follow instructions from step 1a 90% to the letter using the iso from step 2a, changing the 10% ignored until it “works”.
   3a) Bonus points if you try to write the image to the same USB the live environment is on - if the “chosen” method allows this, congratulations, secret ending unlocked, skip to step 6.
4. Bing search “is windows 11 free install safe”
   4a) Change search to “is windows 11 free install safe reddit”
5. Either unmount the live USB while it is still running, or just hard power down the computer, without even closing the browser.
6. Plug USB drive directly into USB power wall adapter and never plug it into a computer again.
Now, before you ask, yes, you absolutely could do all of this in a VM, but I’ve found it is more tormenting if there is real actual hardware wasted and/or at risk of damage. Linux is the natural enemy of consumerism, so buying a 12 pack of flash drives just to do this on a thinkpad over and over until the thinkpad dies really hurts the linux ISO to their core.
And only feed it every 3 days. Oh, and play fucking ‘dembow’ at full blast 24/7, that’ll drive anyone crazy.
You can already use a fake address for any prepaid sim
I have to read this all soon. But I hope something like this shows up for Canada.
In Canada? Not happening. Canada is chasing the EU and the UK in everything related to chat control and all that crap.
Unfortunately.