Friday, finally. Time for the weekly roundup.
On the podcast this week: the latest Epstein dump, how it’s really a disaster in a lot of ways, and Moltbot and its terrible security. In the section for subscribers at the Supporter level, two recent stories about a fundamental issue exposing a bunch of very sensitive data.
And in this week’s interview, Joseph talks to Samuel Bagg, assistant professor of political science at the University of South Carolina. Bagg recently wrote a fascinating essay about how the problem with lots of things might be knowledge-based (people believing stuff that’s wrong or dangerous) but the solution is not more knowledge. It’s all about social identity.
Subscribers at the Supporter level get early access to interview episodes. Next week Emanuel talks to Patrick Klepek of Remap! Listen to the weekly podcasts on Apple Podcasts, Spotify, or YouTube.
In other news: If you missed getting a physical copy of the zine, we got you. Our zine about ICE surveillance tactics is now available as a PDF! Read more about why we’re releasing it free in the digital realm, and get it here.
LOCK IT DOWN
The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records. The court record shows what devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it can be before the FBI tries other techniques to access a device.
TOTAL MESS
The Department of Justice left multiple unredacted photos of fully nude women or girls exposed as part of Friday’s dump of more than 3.5 million pages of files related to the investigations and prosecutions of Jeffrey Epstein and Ghislaine Maxwell. Unlike the majority of the images in the released files, neither the nudity nor the faces of the people were redacted, making them easy to identify. In some of the photos, the women or girls were fully nude or partially undressed, posed for cameras, with their genitals exposed. The DOJ removed the photos after 404 Media requested comment.
BAD VIBES
According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted. Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even if they don't fully review or understand what they produce. But there’s a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that’s been built up over decades.
DEMOCRACY DIES
The Washington Post has been a critical institution in the lives of millions of people. What we’re seeing, though, is not a mistake. Unlike the Graham family in the late 1990s, Jeff Bezos has no reason to try to make his newspaper better or to try to best serve its readers. The newspaper's finances are barely a rounding error compared to Bezos's wealth, but what its journalists do—accountability journalism about the rich and powerful—does not serve someone who is rich and powerful. The Washington Post and many of its reporters are no longer useful to Bezos, and so he has decided to get rid of them. The Washington Post’s journalists, many of whom lost their jobs this week, have continued to do critical work, but Bezos has been systematically making the paper worse for years.
READ MORE
- Musk to Epstein: ‘What Day/Night Will Be the Wildest Party on Your Island?’
- Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
- Privacy Telecom ‘Cape’ Introduces ‘Disappearing Call Logs’ That Delete Every 24 Hours
- Wedding Photo Booth Company Exposes Customers’ Drunken Photos
- Hackers and Trolls Target Wave of ICE Spotting Apps
- Scientists Keep Discovering Mysterious Ancient Tunnels Across Europe
- This Tool Searches the Epstein Files For Your LinkedIn Contacts
- This SpaceX Situation: Not Good!
- The DOJ Redacted a Photo of the Mona Lisa in the Epstein Files
- Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law
404 MEDIA IN THE WILD
I went on Science Friday to talk about deepfakes and the Grok debacle, and if you're an Aussie you might have heard me discussing it there, too.
The English version of the documentary Emanuel appeared in about AI, called "AI: The Death of the Internet," is out now!
Joseph joined Jon Stewart to talk about ICE surveillance tactics, and also appeared on PBS News Hour.
And this morning, Jason was on WNYC talking ICE and surveillance as well.
If you'd like us to come on your show, podcast, or panel, contact us.
Replying to DOJ Released Unredacted Nude Images in Epstein Files, Rob writes:
“Inexcusable. I worked in ediscovery for a bit and I would be so ashamed if this happened on anything on my watch. Like, it is a shitty job to spend 12+ hours scanning/formatting/bates-stamping/printing documents + doing the redactions and having to see disturbing images, but part of why you put up with the boredom and the horror is because at the end of the day, you are playing your part in helping people get justice.”
And in response to Our Zine About ICE Surveillance Is Here, Cam writes:
“Fantastic. Got mine in the mail yesterday. Phenomenal labor of love - excited to pass this around and share the PDFs as well. Keep doing what you're doing.”
We will with your support! Thank you!
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss AI bubble hysteria, "just go independent," and more.
JOSEPH: This week we reported how the FBI has been unable to get into a Washington Post reporter’s iPhone because it was in Lockdown Mode. Side note, I wonder how the insane cuts at The Post are going to impact its digital or physical protection of journalists, if at all. This court record was very, very interesting in that it’s a quite rare admission of why exactly authorities were unable to access a device.
I don’t think there’s an area of cybersecurity, which we have a lot of reporting on, that is as constantly in flux as mobile forensics. Nothing stays still, even for what feels like five minutes. There are constant tech developments, both on the side of Apple and Google, and on the side of companies trying to break into those phones, like Cellebrite and Grayshift, the creator of Graykey.
As you probably remember, this dynamic really started back in 2016 after the San Bernardino terrorist attack. Authorities couldn’t get into an iPhone linked to the attack; the DOJ tried to legally compel Apple to build a backdoor to facilitate brute-forcing the PIN; Apple declined, saying it would fundamentally lower security for all users; the DOJ backed off when the FBI had a third party break into the phone, which was later revealed to be Azimuth Security (as I’ve said before, I had one source on that, but The Washington Post had more, so it was able to publish. It sucks they are gutting their journalists).
There have been some other high profile cases of authorities not being able to get into phones, but nothing quite like that Apple vs. FBI case. After Azimuth unlocked the phone, you had other companies largely emulate the capability of being able to unlock modern-ish iPhones. Probably the first of those was Grayshift, which Forbes first reported the existence of. Oh my god, a company has a little box that can just unlock iPhones even with their brute force protections? It was pretty nuts at the time but looks quaint now.
Then you get into what I usually refer to as the cat and mouse dynamic. Grayshift, and then Cellebrite, had the tools to break into recent iPhones. So then Apple introduced some other features. There was USB Restricted Mode, which turned the Lightning port into a charge-only interface, meaning forensic tools couldn’t connect to it. Grayshift then said it had defeated the feature. Some cops also explored downloading data quickly without first getting a warrant in order to circumvent it.
The world kept spinning and both sides of the fight kept doing their thing. As we saw from Cellebrite and Graykey related leaks, generally these tools could get into older or even recent phones, but might have an issue with the latest device running the latest operating system. Then they’d find a way in and the cycle would continue.
The next major development was the iPhone rebooting we revealed in 2024. That feature returned iPhones that hadn’t been unlocked for a few days (presumably by the user) to a state that makes them harder to unlock. I’m not sure what the latest on that is regarding mitigations.
My point is that this story will never end, really. There will always be some sort of development in the mobile forensic space. Always some little setting or tweak or new attack that, unless you’re following closely, you’re probably not going to know about. Which makes it hard to know when your phone is really secure.
I suppose that’s the attraction of Lockdown Mode: it is supposed to stop connections between the phone and a forensic device completely, so users don’t have to worry about niche software idiosyncrasies they probably have no idea exist.
I mentioned this in passing on Bluesky when I posted the article, but I think Apple has done a pretty bad job of explaining that Lockdown Mode can, seemingly, protect against mobile forensic tools. Much of the marketing and stuff on the company’s site is about protecting users from mercenary spyware (read: NSO Group, Paragon, etc). There’s no mention of mobile forensics tech like Cellebrite or Graykey. Maybe that’s for a couple of reasons: Cellebrite and Graykey absolutely have legitimate uses, and are used to combat serious crime every single day. They are abused, absolutely, but they’re also used constantly in all manner of child abuse, financial fraud, murder, and kidnapping investigations. Basically, any crime, really. So, having Apple say on its website ‘we defeat the tool that lets cops collect evidence on murderers’ is probably not a look it wants. Spyware is much easier to publicly push back against. That industry is saturated with abuse.
But, now we know that Lockdown Mode can protect against these tools if you’re at risk of your device being seized and searched. That is obviously very useful information for journalists, activists, protesters, and others to know.
JASON: It has been a brutal week for journalism, a brutal year, a brutal decade. For journalism and for the world more broadly. It has been hard to pay attention to much of anything besides ICE, and I know many people who can’t think about anything else at all right now, and I completely understand that. I have done that at times in my life and it makes me extremely defeatist and useless, so over the last few years I have really focused on working hard and doing things that I feel are meaningful, using my journalism skills and my platform, and then either logging off or explicitly focusing on being with my friends and family, exercising, or otherwise doing things that bring me joy. This is a really lucky place to be in, which I don’t take for granted, but I figure I am more useful energized and not fully miserable all the time, and so I make sure that I have some sort of balance in my life.
That’s a bit of a non sequitur preamble before I get to my real thought, which is about independent journalism, starting a business, “just going independent” and things of this nature. Whenever there are mass layoffs like we saw at the Washington Post this week, there’s understandably an online debate about the sustainability of journalism, and also a debate about whether going independent can work, who can go independent, how to do it, etc. The ones I’ve seen in the last few days feel pretty pessimistic to me. And it’s true that there are far fewer journalism jobs, only a tiny number of traditional publications are hiring, and it’s getting harder to stand out amongst a sea of substacks and independent sites, especially considering the additional pressures of competing against AI slop, etc. I also see a lot of people saying that there is subscription fatigue, debating the ethics of paywalls, that there are concerns about legal resources, healthcare, running a business, editing help, etc. These are all real, and everyone’s situation is different.
I understand the impulse to have these conversations but I also never really know what to say about them, and so I usually don’t participate, because honestly the discourse on this topic feels extremely fraught. We are talking about people’s livelihoods, their life’s work, their personal appetite for being an entrepreneur, their healthcare situation. And this always happens immediately after a bunch of people lose their jobs, so it always happens during a very raw situation.
So again, deep breath, knowing I’m coming from a place of unimaginable privilege having been a part of 404 Media: Going independent is the best thing I have ever done in my life. I did not know or ever hope to dream that anything like this could have happened to me. I am a happier person in every conceivable possible way having gone independent. I work a lot, but I also have more balance in my life than I have ever had. I know this is not the case for everyone, but it is possible to do this and make a living. It is still possible. And for many people I think it is better to at least try to start something new than it is to try to hitch yourself to another dying business. (This is the reason for my preamble: It sometimes feels weird/bad/wrong to feel somewhat secure when so many people do not.)
If you are a journalist and you are thinking of trying this, talk to me first. I am happy to talk to you. A lot of the hurdles, problems, and fears expressed by people about going independent are real, but they are also not insurmountable and often they are not as big of a deal as you would expect. Legal help is available. Editing help is available. Healthcare … healthcare is the hardest thing, it’s a big thing, and I don’t have a good answer there. Running a very basic business does not take that much time, and much of it is automated through platforms like Ghost. Subscription fatigue, I’m sorry, is fake. Well, it’s real on an individual level, but the number of people you need to subscribe to something to approximate what a journalism job pays is not that many. There are hundreds and hundreds of millions of people who speak English and you need to convince a few hundred of them that your work is worth supporting. This is possible. It’s doable. You need to post a lot and you may need to learn to do a few new things. You need to be kind of shameless, which didn’t come easy to me and still doesn’t. But we have learned a lot in the last few years. If this is something you want to do, email me.
EMANUEL: It’s time once again to talk about the big AI picture: Bubble or no bubble, the end of all knowledge-based work or a useless tool, a civilization-shifting technology or a slop machine?
To be honest, I’m not going to satisfactorily answer any of these questions, but I see all the same ridiculous, shocking, scary claims about AI you’re seeing, and I want to talk through some of the ways I processed them this week.
As people were losing their minds over Moltbook this week and discussing how powerful the latest LLMs are at coding, I was reporting a story about a company that heavily relies on generative AI, and how it’s failing that company’s workers and users. My reporting in this case requires sifting through a massive amount of text without much of a direction, so while everyone was talking about how powerful AI is right now, I thought: why not use one of these LLMs to do some of that work for me?
This idea never got off the ground for technical reasons, but it made me think a lot about how I could incorporate AI into my workflow. A lot of reporting is pretty tedious because it requires sifting through a ton of boring material in order to maybe find something important without having any idea what it might be, and I can easily imagine AI being helpful for that task. AI currently has the ability to sift through video, transcripts, PDFs, social media accounts, etc. The problem I kept coming back to is that if I used AI to do any of that sifting for me, I would have no idea what it may have missed. Maybe it could find useful leads much faster, but so often what happens during this process is that I’ll read through a document and see something that’s only tangentially related, or a name I didn’t recognize, and follow those leads not because it makes logical sense, but because I’m curious and bored of looking at the same document and need a change of pace. Sometimes, that’s how I find some of the most interesting stuff in my reporting. As far as I’m aware, no current LLM can do that, and even if it could, I would have to trust that it didn’t miss any of those opportunities because of an error.
Then I thought, while all of that may be true, I could still stick to my manual scanning process but use AI for a first pass. But I felt the overwhelming desire to be lazy begin to take hold before I even finished the thought. As numerous studies have shown, reliance on automated tools leads to overreliance on automated tools and, ultimately, deskilling. I could feel myself atrophy just by entertaining the idea. Ultimately I’m still open to the possibility of using LLMs in some similar fashion, but at the moment it seems like more trouble than it’s worth.
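For what it’s worth, a “first pass” like the one described above doesn’t even have to start with an LLM. Here’s a minimal, purely hypothetical sketch of the idea: a cheap keyword pre-filter that flags documents worth a human read first, with the understanding that an LLM summarizer could slot in afterward. The keywords, scoring, and threshold are all invented for illustration; none of this is 404 Media’s actual tooling.

```python
# Hypothetical sketch: flag documents for manual review based on keyword hits.
# The scoring scheme and threshold are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Flagged:
    doc_id: str
    score: int
    hits: list = field(default_factory=list)


def triage(documents: dict, keywords: list, threshold: int = 1) -> list:
    """Cheap pre-filter: count keyword hits per document and flag anything
    at or above the threshold. A human still reads everything that matters;
    this only decides reading order, not relevance."""
    flagged = []
    for doc_id, text in documents.items():
        lowered = text.lower()
        hits = [k for k in keywords if k.lower() in lowered]
        if len(hits) >= threshold:
            flagged.append(Flagged(doc_id, len(hits), hits))
    # Highest-scoring documents first, so the most promising material
    # surfaces at the top of the reading pile.
    return sorted(flagged, key=lambda f: f.score, reverse=True)
```

The point of keeping the filter this dumb is that its failure modes are obvious: you know exactly what it can and cannot miss, which is the property Emanuel says the LLM version lacks.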
When I looked back over at X, I saw both AI boosters and skeptics agree that something has changed in the last few months. People who used to think the entire thing was a bubble now say they see AI embed itself into tech company workflows in a way that’s irreversible. At the same time, Moltbook, the social media for AI agents that was driving much of this hype, was revealed to be a sham and a security nightmare.
I’m tired of saying it and I’m sure you’re tired of reading it but my position remains that AI can be both an overhyped tech bubble, and, at the same time, a technology that is here to stay and that will fundamentally change our lives in many ways.
It’s wild to me that people who were old enough to live through or at least understand the history of the dot-com boom can’t hold this thought in their heads. The internet was new and ‘overhyped’ and a lot of companies raised way too much money, so the market had a very dramatic correction, but obviously the internet was here to stay and its impact can’t be overstated.
I’m not predicting the future but it certainly feels like we’re on a similar path with AI right now.
Finally, these are my two takeaways from yet another week of AI hysteria:
- We will continue to focus on what AI is actively doing right now rather than speculating on how powerful it can be theoretically and will be in the future.
- Our philosophy has always been to ground our reporting in first hand experience with the technology we’re reporting on, and I think that I have fallen a little behind in that respect, and need to experiment with some of the more recent AI tools.
SAM: Last night I decided to write a short blog in a category I’d call “check this shit out,” where the purpose isn’t to solve a mystery or break news, but to just point at something everyone is talking about and add context to it. I saw people on Bluesky and Reddit posting an image from the Epstein files of the Mona Lisa with a redaction over the portrait’s face. The image itself is an instant classic, but the context behind it is that thousands of instances of victims’ personal information, including faces and full names, have been exposed as part of this Epstein data dump disaster. So redacting the face of a 500-year-old painting seemed patently absurd.
I sent a request for comment to the DOJ specifically asking why it was redacted and also whether AI was used in redactions, because that’s another piece of context to this story: the people making the image go triple platinum on every social media platform were also speculating (or straight up declaring) that a facial recognition system was redacting images in the files, and that’s why an unrelated, centuries-old female face was caught in the net. This isn’t the craziest theory ever; AI systems similarly overindex for things like nudity, sexual speech, and terms-violating content across all social media platforms, and it’s a huge problem. Sending overzealous bots to moderate complex, nuanced user generated content is messy and requires a lot of human oversight, and usually puts the onus on users to appeal and attempt to correct (or abide by) rules that aren’t even made explicit. AI catching the Mona Lisa and not catching real victims' faces is not the wildest theory in the world. But I don't know, and don't want to speculate, about whether the DOJ used AI to do redactions. If they did, that's bad. If they didn't, the situation is still messy and terrible.
I published the story with a note that the DOJ did not immediately respond to a request for comment, and about an hour later — incredibly speedy, considering they took a day and a half to remove sexually explicit images of victims when we flagged them last weekend — someone at the DOJ responded saying they redacted the Mona Lisa because it’s actually a victim’s face in the photo. And now, looking closely at the image itself (which is already cropped tightly to exclude any background), I can see what seems like a thumb or something along the edge, as if it’s a person holding a printed photo. Maybe it’s from a novelty shop outside the Louvre, or maybe it’s one of those cutout photo ops where you stand behind an image and put your face in the hole. Either way, it’s not just a photo of the painting hanging in the Louvre like everyone (including myself) assumed. The story went from “wow the DOJ incompetently left so many images unredacted of real women while protecting a painting, how absurd” to “wow this image is actually another tiny piece of evidence in the most harrowing criminal investigation of our lifetime.” I literally said WHOA WHOA WHOA out loud alone in my apartment when I got the DOJ’s email. They did not answer the AI question.
This is simply a BTB and I don’t have any grand lesson to end with, but I do want to say that I don’t think — and I don’t think anyone’s said or thinks this, either, but just to be clear about it — that the redaction being genuine and correct changes the context of the larger story, and why I blogged it in the first place, which is that the process of protecting victims while releasing these files has been a disaster. Their lawyers have said their phones are ringing off the hook with victims realizing their information was made public in these files. We talked more about this on the podcast this week, if you're interested.