[syndicated profile] snopes_feed

Posted by Jordan Liles

Social media users shared misinformation referencing CNN's genuine investigative reporting about a dark corner of the internet.
[syndicated profile] efforg_feed

Posted by Cindy Cohn, Betty Gedlu

For years, EFF has pushed technology companies to make real human rights commitments—and to live up to them. In response to growing evidence that Palantir’s tools help power abusive immigration enforcement by ICE, we sent the company a detailed letter asking how the promises in its own human rights framework extend to that work.

This post explains what we asked, how Palantir responded, and why we believe those responses fall short. EFF is not alone in raising alarms about Palantir; immigrants' rights groups, human rights organizations, journalists, and former employees have raised similar concerns based on reports of the company's role in abusive immigration enforcement. We focus here on Palantir’s own human rights promises.

At the outset, we appreciate that Palantir was willing to engage respectfully, and we recognize that confidentiality and security obligations can limit what it can say. Nonetheless, measured against Palantir's own human rights commitments, its decision to keep powering ICE with tools used in dragnet raids and discriminatory detentions is indefensible. A good-faith application of those commitments should lead Palantir to end its contract with ICE, and refuse new, or end current, contracts with any other agency whose work predictably violates those commitments.

Palantir’s Public Promises

Palantir has long said it performs comprehensive human rights analysis on its work. It has also worked with ICE for years, apparently in a more limited capacity than today. It has publicly embraced the UN Guiding Principles on Business and Human Rights, the Universal Declaration of Human Rights, and the OECD Guidelines for Multinational Enterprises. Additionally, in its response to EFF, Palantir says its legal responsibilities are only “the floor” for broader risk assessments.

That was the point of our letter. We asked what human rights due diligence Palantir conducted when it first contracted with ICE and DHS; whether it performed the “proactive risk scoping” it advertises; how it reviews work over time; what it has done in response to reports of misuse; and whether it has used “every means at [its] disposal”—including contract provisions, third‑party oversight, and termination—to prevent or mitigate harms.

For the most part, Palantir did not answer our accountability questions. It did correct one point: Palantir says it does not currently work with CBP, and available evidence supports that, though it also made clear it could work with CBP in the future.

Palantir also raised a red herring it often deploys in response to criticism. It denied building a “mega” or “master” database for ICE and denied creating a database of protesters, which some ICE agents have claimed exists. We call it a red herring because those denials sidestep the central issue: what capabilities Palantir’s tools actually provide to ICE.

To be clear, EFF has never claimed that Palantir is building a single centralized database. Our concern is grounded in how Palantir’s tools allow ICE to query and analyze data from multiple databases through a unified interface—which from an agent’s perspective can be a distinction without a difference.

In the sections that follow, we compare Palantir’s account of its work for ICE with evidence about how its tools seem to be used, and explain why legality, internal process, and sustained “engagement with the institutions whose vital tasks exist in tension with certain human rights” are no substitute for real human rights due diligence—because respect for human rights must be measured by outcomes, not just process.

Palantir’s ICE Work Undermines Its Own Standards

Palantir says ICE uses its ELITE tool for “prioritized enforcement”: to surface likely addresses of specific people, such as individuals with final orders of removal or high‑severity criminal charges. But according to sworn testimony in Oregon, ICE agents use ELITE to determine where to conduct deportation sweeps, and the system “pulled from all kinds of sources” to identify locations for raids aimed at mass detentions, including information from the Department of Health and Human Services such as Medicaid data. A leaked ELITE user guide for “Special Operations” also instructs operators to disable filters to “display all targets within a Special Operations dataset.” Those details directly conflict with Palantir’s narrow description of ELITE’s role.

Additionally, Palantir's response leans on legal authority and the Privacy Act. But it does not identify any specific lawful basis for using Medicaid data in this way or explain how its software enables that access. Even if a legal theory exists, turning sensitive medical information into fuel for dragnet sweeps is hard to reconcile with its commitments to privacy, equity, and the rights of impacted communities. Its own human rights framework requires grappling with foreseeable harms its products may enable, not just invoking possible legal authorization.   

Reporting shows that many people detained by ICE had no criminal record, much less a serious one, and in many cases no final order of removal. An overwhelming percentage of those detained were, or appeared to be, from Central and South America, and nearly one in five ICE arrests were street arrests of a Latine person with neither a criminal history nor a removal order.

These facts raise obvious questions about discriminatory impact, racial profiling, and whether Palantir's tools are facilitating detention practices far broader than the company claims. Palantir's response does not meaningfully engage those questions, despite the company's commitments to non-discrimination and due process.

EFF’s letter asked Palantir to explain how it is honoring its commitments to civil liberties in light of reports linking Palantir-owned systems to facial recognition and other tools used to identify and target people engaged in observing and recording law enforcement, including in connection with the deaths of Renée Good and Alex Pretti. The letter also cites an incident in which an officer scanned protesters’ and observers’ faces and threatened to add their biometrics to a “nice little database.” Palantir’s response denies involvement in any such database.

A narrow denial about a single database does not answer the broader question: if ICE, its customer, claims it has this capability, what has Palantir done to ensure its tools are not used to chill protected speech, retaliate against observers, or facilitate targeting of people engaged in First Amendment‑protected activity? For a company that claims to value democracy and civil liberties, this is not a marginal issue; it goes to the heart of its human rights commitments.

Legality, Process, and Engagement with ICE Are Not Human Rights Standards

As mentioned above, Palantir leans heavily on legal compliance. It says government data sharing is “subject to, and governed by, data sharing agreements and government oversight” and that any sharing it facilitates is done according to “legal and technical requirements, including those of the Privacy Act of 1974.” It describes its role in ELITE as “data integration,” enabling ICE “to incorporate data sources to which it has access,” including data shared under inter‑agency agreements.

EFF is very familiar with the Privacy Act—we are currently suing the Office of Personnel Management over it. But Palantir’s response does not clarify how ICE legally has access to this information, how Palantir ensures that it follows those legal processes, or how Palantir’s software may have enabled access in the first place. More critically, that is still a legal answer to a human rights question, and legal compliance alone is insufficient as a human rights standard.

Human rights due diligence requires assessing foreseeable harms, responding to credible evidence of abuse, and changing course when the facts demand it—something Palantir, on paper, recognizes. That’s why it stresses that its legal responsibilities are only “the floor for [its] broader risk assessments,” pointing to the way it built toward GDPR‑style data protection principles and incorporated international humanitarian law principles before those requirements were formalized. If those commitments mean anything, Palantir has to explain how specific practices—like enabling ICE to use Medicaid data in dragnet raids—square with that broader standard.

Palantir also leans heavily on process. It points to a “layered approach” to risk, frameworks that purportedly examine multiple dimensions of privacy and equity, and “indelible” audit logs that track how its tools are used. Audit logs are not sufficient for protecting human rights. There is a long history of authoritarian regimes keeping extensive logs of their human rights abuses. Those structures can be useful for protecting human rights, but only if they are used to detect harm, trigger reassessment, and lead to changes in design, access, support, or contract enforcement when credible reports of abuse emerge.

That is why we pressed Palantir to spell out clearly what reports of misuse it has received, what changes it made, and on what timeline. Again, instead of offering specific examples, Palantir points back to its internal framework and its willingness to “move towards the hardest problems” as evidence of effective efforts. But human rights are an outcome, not just a process.

Human rights due diligence is not a one-time approval at contract signing; under the UN Guiding Principles, it is supposed to be continuous, with new facts triggering reassessment. Complaints, media reports, leaks, litigation, and sworn testimony are exactly the kinds of events that should prompt review. If Palantir has an account of that work—how often it reviews ICE contracts, who conducts the reviews, what triggers them, and how findings reach the Board—it had every opportunity to describe it. Instead, it offered a generic assurance that it remains committed to human rights without engaging with the specifics. Confidentiality may sometimes limit disclosure, but it is no substitute for accountability.

What Needs to Happen Next 

Palantir wants credit for “mov[ing] towards the hardest problems” and engaging with institutions whose missions it says are “in tension with certain human rights” while maintaining a human rights framework. But when the record includes violent raids, dragnet detentions, use of sensitive medical data, discriminatory targeting, retaliation against observers, and deaths tied to immigration enforcement operations, pointing to a values page is not enough; it has to reckon with the results.

Voluntary corporate human rights policies often function as weak accountability mechanisms: companies can tout principles, publish policies, and answer criticism with polished statements while changing very little on the ground. Palantir’s response fits that pattern all too well. EFF will continue to challenge its role in abusive immigration enforcement and to demand more accountability from technology vendors whose tools enable human rights violations. We are also happy to continue a dialogue with Palantir to that end. For now, this much is clear: Palantir needs to reconsider its contract with ICE and with all agencies whose work predictably violates human rights.

[syndicated profile] rwitchesvspatriarchy_feed

Posted by /u/swells61

It’s been nearly a year on HRT but I am finally starting to see her in the mirror

Just wanted to share some unexpected joy that happened tonight. Been dysphoric and stressed the past couple weeks but even if it was for a moment I saw who I always wanted to be in the mirror tonight and that meant the world to me.

submitted by /u/swells61
[link] [comments]
[syndicated profile] hackernewsfp_feed
  1. Vercel April 2026 security incident
    (802 points, bleepingcomputer.com, comments)
  2. College instructor turns to typewriters to curb AI-written work
    (477 points, sentinelcolorado.com, comments)
  3. Archive of BYTE magazine, starting with issue #1 in 1975
    (580 points, archive.org, comments)
  4. What are skiplists good for?
    (281 points, antithesis.com, comments)
  5. SPEAKE(a)R: Turn Speakers to Microphones for Fun and Profit [pdf] (2017)
    (183 points, usenix.org, comments)
  6. Changes in the system prompt between Claude Opus 4.6 and 4.7
    (342 points, simonwillison.net, comments)
  7. Game devs explain the tricks involved with letting you pause a game
    (431 points, kotaku.com, comments)
  8. The seven programming ur-languages (2022)
    (334 points, madhadron.com, comments)
  9. The RAM shortage could last years
    (325 points, theverge.com, comments)
  10. The Bromine Chokepoint
    (209 points, warontherocks.com, comments)
  11. Nanopass Framework: Clean Compiler Creation Language
    (137 points, nanopass.org, comments)
  12. Modern Common Lisp with FSet
    (182 points, common-lisp.dev, comments)
  13. Notion leaks email addresses of all editors of any public page
    (382 points, twitter.com/weezerosint, comments)
  14. The world in which IPv6 was a good design (2017)
    (225 points, apenwarr.ca, comments)
  15. Updating Gun Rocket through 10 years of Unity Engine
    (110 points, jackpritz.com, comments)
  16. Show HN: Prompt-to-Excalidraw demo with Gemma 4 E2B in the browser (3.1GB)
    (140 points, teamchong.github.io, comments)
  17. Zero-Copy GPU Inference from WebAssembly on Apple Silicon
    (114 points, abacusnoir.com, comments)
  18. 2,100 Swiss municipalities showing which provider handles their official email
    (214 points, mxmap.ch, comments)
  19. Six Levels of Dark Mode (2024)
    (101 points, cssence.com, comments)
  20. NASA Shuts Off Instrument on Voyager 1 to Keep Spacecraft Operating
    (229 points, nasa.gov, comments)
  21. I wrote a CHIP-8 emulator in my own programming language
    (76 points, github.com/navid-m, comments)
  22. Show HN: Faceoff – A terminal UI for following NHL games
    (119 points, vincentgregoire.com, comments)
  23. Show HN: Shader Lab, like Photoshop but for shaders
    (150 points, basement.studio, comments)
  24. Ask HN: How did you land your first projects as a solo engineer/consultant?
    (271 points, news.ycombinator.com, comments)
  25. Vercel says internal systems hit in breach
    (377 points, decipher.sc, comments)
  26. Prove you are a robot: CAPTCHAs for agents
    (103 points, browser-use.com, comments)
  27. It's cool to care (2025)
    (109 points, alexwlchan.net, comments)
  28. My first impressions on ROCm and Strix Halo
    (56 points, marcoinacio.com, comments)
  29. Spiral staircase with a single guardrail once led to the top of the Eiffel Tower
    (50 points, smithsonianmag.com, comments)
  30. Keep Pushing: We Get 10 More Days to Reform Section 702
    (182 points, eff.org, comments)
  31. Metatextual Literacy
    (54 points, jenn.site, comments)
  32. The creative software industry has declared war on Adobe
    (231 points, theverge.com, comments)
  33. Bypassing the kernel for 56ns cross-language IPC
    (79 points, github.com/riyaneel, comments)
  34. Binary GCD
    (88 points, algorithmica.org, comments)
  35. Airline worker arrested after sharing photos of bomb damage in WhatsApp group
    (277 points, lbc.co.uk, comments)
  36. Binary Dependencies: Identifying the Hidden Packages We All Depend On
    (51 points, vlad.website, comments)
  37. I dug into the Postgres sources to write my own WAL receiver
    (51 points, medium.com/mailbox.sq7, comments)
  38. Reading Input from an USB RFID Card Reader
    (40 points, kevwe.com, comments)
  39. Reverse Engineering ME2's USB with a Heat Gun and a Knife
    (65 points, github.com/coremaze, comments)
  40. Swiss authorities want to reduce dependency on Microsoft
    (224 points, swissinfo.ch, comments)
  41. SI Units for Request Rate (2024)
    (85 points, entropicthoughts.com, comments)
  42. When moving fast, talking is the first thing to break
    (113 points, daverupert.com, comments)
  43. Notes from the SF peptide scene
    (133 points, 12gramsofcarbon.com, comments)
  44. KTaO3-Based Supercurrent Diode
    (31 points, acs.org, comments)
  45. A. J. Ayer – ‘What I Saw When I Was Dead’ (1988)
    (86 points, philosopher.eu, comments)
  46. Minimal Viable Programs (2014)
    (46 points, joearms.github.io, comments)
  47. Blue Origin's rocket reuse achievement marred by upper stage failure
    (74 points, arstechnica.com, comments)
  48. Matt Mullenweg Overrules Core Committers; Puts Akismet on WP 7's Connector List
    (57 points, therepository.email, comments)
  49. I learned Unity the wrong way
    (94 points, darkounity.com, comments)
  50. Hot-wiring the Lisp machine
    (42 points, scheatkode.com, comments)
  51. Eliza a Play by Tom Holloway
    (22 points, mtc.com.au, comments)
  52. Pairwise Order of a Sequence of Elements
    (25 points, morwenn.github.io, comments)
  53. Why Zip drives dominated the 90s, then vanished almost overnight
    (72 points, xda-developers.com, comments)
  54. Russia's doping program is run by the same FSB team that poisoned Navalny
    (96 points, theins.press, comments)
  55. C++26: Reflection, Memory Safety, Contracts, and a New Async Model
    (37 points, infoq.com, comments)
  56. CEOs admit AI had no impact on employment or productivity
    (85 points, fortune.com, comments)
  57. PM Carney declares U.S. ties now a 'weakness' in address to Canadians
    (134 points, ctvnews.ca, comments)
  58. Banned by Anthropic?
    (99 points, bannedbyanthropic.com, comments)
  59. 3D-Printing a Trombone
    (25 points, unnamed.website, comments)
  60. Bipartisan Bill to Tighten Controls on Sensitive Chipmaking Equipment
    (26 points, house.gov, comments)
[syndicated profile] rwitchesvspatriarchy_feed

Posted by /u/Ickis-The-Bunny

This past weekend marked 4 years since starting a really magical journey

Time is a crazy thing from any measure or perspective. 4 years feels like an entire different life from where I am now, and I couldn't even imagine where I would be when I started taking those steps in the direction my intuition told me was correct. Always trust your tummy! I feel like I can watch some of the bigger changes over the years through Ren faire pics, along with my own positive mental health changes, and how I turned a walking stick from a dead friend into a part of my magic. I get to carry their memory while lighting my own path.

Here are some evolutionary pictures of myself and my staff from the last few years at the Ren faire. I added new vials with colored UV paint that flows for a long time after dark, and added UV LEDs as well! There is a small flashlight at the top that can change colors, or be super bright white for walking around the campgrounds or hiking at night! There is also a functional incense burner near the bottom. If you need an excuse to go and enjoy a bunch of queer love and acceptance, find your local faire!

From one gender witch to all witches, thank you for joining this staff meeting, I hope you all brought yours as well to share (Or post)!

submitted by /u/Ickis-The-Bunny
[link] [comments]
[syndicated profile] efforg_feed

Posted by Joe Mullin

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

Reddit is one of the largest user-generated content platforms on the internet, built around thousands of independent communities known as subreddits. Some subreddits cover everyday interests, while others host discussions about specialized or controversial topics. These communities are created and moderated by volunteers, and the site’s decentralized model means that Reddit hosts a vast range of user speech without relying on centralized editorial control. 

Ben Lee is Chief Legal Officer at Reddit, where he oversees the company’s legal strategy and policy work on issues including content moderation and intermediary liability. Before joining Reddit, Lee held senior legal roles at other tech companies including Plaid, Twitter, and Google. At Reddit, he has been closely involved in litigation and policy debates surrounding Section 230, including cases addressing the legal risks faced by platforms and their users and moderators. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.

Joe Mullin: When we talk about user rights and Section 230, what rights are most at stake on a platform like Reddit? 

Ben Lee: Reddit, we often say, is the most human place on the internet. What’s often missing from the debate is that Section 230 protects people—not platforms. 

It protects millions of everyday humans and volunteer moderators who participate in online communities. Without it, people could face lawsuits for voting down a post, enforcing community rules, or moderating a discussion. These are foundational activities on Reddit, and frankly, the whole internet.

If you had to describe Section 230 to a regular Reddit user without naming the law, what would you say it does for them?

Section 230 protects your ability to participate in community moderation.

Even if all you are doing is up-voting or down-voting content, that’s participation. On Reddit, everyone is a content moderator, through voting. Up-voting determines the visibility of content. 

We believe, strongly, this is one of the only models to allow Reddit to scale. You make the community part of the moderation process. They’re invested in the community, making it better. 

How would user speech be affected if Section 230 were eliminated or weakened? 

We would undermine community self-governance—the notion that humans can do content moderation, and take that responsibility for themselves, whether you’re a small blog or a big forum. I like to think of Reddit as composed of this federation of communities that range from the tiny to the humongous. That’s what the internet is! 

The legal risk would discourage people from moderating, or even speaking at all. The kind of speech we’re trying to protect is often critical of powerful people or entities. If a moderation decision leads to litigation from those powerful entities, that’s an expensive proposition to fight. 

Reddit relies on user-run communities and volunteer moderators. Can you walk me through how content moderation and legal complaints actually work in practice, and where Section 230 comes into that? 

We have a tiered structure, like our federal system. Each community is like a state: it has its own rules, and enforces them. The vast majority of content moderation decisions are made by the communities, not by Reddit itself. 

Reddit is built on self-governing communities that are moderated by volunteers, supported by automated tools. Section 230 gives Reddit the freedom to experiment, and lets users shape healthy, interest-based spaces.

Section 230 is fundamental to protecting the moderators from a frivolous lawsuit. A screenwriting community might want to protect their community from scammy competitions—and then they get sued by that competition. 

Or a community wants to keep their conversation civil. And, for example, may not allow Star Trek characters to be called “soy boys,” and they enforce that. Then a person sues. 

I wish these were hypotheticals. But they were actual lawsuits. And we have them, routinely. 

What are policymakers missing about Section 230? 

The [moderation] decisions being criticized in court are decisions to try to make the internet safer. In none of the cases that I mentioned is there a moderator saying, “I want to increase harmful content!” These are good-faith decisions about what makes the internet better. 

Section 230 is, at its core, protecting the ability for people to make those choices for their own communities. 

There's a price to be paid for not having a Section 230. And it will be paid by internet users—not the biggest platforms.

Some see 230 as a way to punish Big Tech. But removing it doesn't punish Big Tech—it makes them more powerful. It's startups, community driven platforms, and individual moderators who rely on Section 230 to compete and innovate. Weakening Section 230 will harm the open internet, and reduce the choice, diversity, and resilience of the internet. 

The big guys, they have armies of lawyers. They have the budget to withstand a flood of lawsuits. Weakening Section 230 just entrenches them. 

In Reddit’s amicus brief in the Gonzalez v. Google Supreme Court case, you point out that without Section 230, many moderation decisions wouldn’t be protected. The brief states: “A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic. Or, a plaintiff might claim interference with economic relations arising from an honest but very critical two-star restaurant review.” 

When you have situations where moderators get threats or litigation, what can you do? 

We have had cases where our own moderators got sued, along with us. In the “soy boy” case, we worked to help find pro bono counsel for the moderators. 

Someone posted “Wesley Crusher is a soy boy,” and it got removed. I'm enough of a Star Trek fan that I understand both the reference, and why the moderator decided—“hey, it's gone. I don't want this here.”

This would not violate our Reddit rules. But the community took it down under its own rules about being civil. It was just not a kind-hearted action, and the community had a right to decide. 

But the moderator got sued. We got sued, actually, because the poster disagreed with that moderation choice. Section 230 is what allowed us to win that case. 

These are just average people, implicated only because they moderated their own community. They are trying to do the right thing by their community. 

In cases where litigation happens, when does Section 230 come into play? 

Section 230 is usually one of the first things that’s talked about in the case. It’s usually the most effective way of saying: if you believe someone has defamed you—please go to the person who defamed you. If you’re looking to the moderator, or to Reddit itself, this is not a great way of getting the justice that you seek. 

Is there a different workflow internationally? 

There’s a very different workflow. We had a prominent case in France where a company was trying to sue moderators, and of course, we didn’t have Section 230 to protect them. So we had to do all sorts of other things to protect them. It got much more complicated. 

The breadth of content that's considered illegal in certain jurisdictions can be somewhat breathtaking. 

Our goal is always to preserve as much freedom of expression as possible for our community. In the U.S., we look at it through the lens of the First Amendment, and other aspects. Outside the U.S., we rely more on the lens of international human rights. 

How would you characterize legal demands around user content, the ones you see most often? 

They tend to be: somebody said something mean about me—take this down. Or someone says: you didn’t allow me to say something mean about someone or some entity. It completely runs the spectrum. 

One law that has already passed that weakens Section 230 is SESTA/FOSTA. From Reddit’s perspective, what changed after that? 

There's some communities we had to shut down, in particular, support communities. There was a cost. Every time Section 230 is narrowed, there’s a cost—some types of speech and communities have a harder time staying online. 

The cost may not seem high to some people, because those communities are not for them. But if they visited them, they’d see that these are actual people, interacting in a positive way. If it wasn’t positive, we have rules for that—but that’s a different question. 

[syndicated profile] efforg_feed

Posted by Matthew Guariglia

In a dramatic middle-of-the-night standoff, a bipartisan set of lawmakers pushing for true reform and privacy protections for Americans bought us some more time to fight! They are holding out for, at a minimum, the requirement of an actual probable cause warrant for FBI access to information collected under the mass spying program known as Section 702.


A reauthorization with virtually no changes was defeated because a core group of lawmakers held strong; they know that people are hungry for real reform that protects the privacy of our communications. We now have a 10-day extension to continue to push Congress to pass a real reform bill. 


Lawmakers rallied late Thursday night to reject a proposed amendment that made gestures at privacy protections but would not have improved on the status quo and would have reauthorized Section 702 for five more years to boot. 

Take action

Tell Congress: 702 Needs Reform

Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency collects full conversations being conducted by and with targets overseas—including by and with Americans in the U.S.—and stores them in massive databases. The NSA then allows other agencies, including the Federal Bureau of Investigation, to access untold amounts of that information. In turn, the FBI takes a “finders keepers” approach to this data: they reason that since it’s already collected under one law, it’s OK for them to see it. 

Under current practice, the FBI can query and even read the U.S. side of that communication without a warrant. What’s more, victims of this surveillance won’t even know, and have very few ways of finding out, that their communications have been surveilled. EFF and other civil liberties advocates have been trying for years to learn when data collected through Section 702 is used as evidence against them. 

Reforming Section 702 is even more urgent because of revelations hinted at by Senator Ron Wyden’s public statements concerning a “secret interpretation” of the law that enables surveillance of Americans, and a public “Dear Colleague” letter he sent to fellow Senators about FBI abuse of Section 702. 

That’s right—the way the government conducts mass surveillance is so secret and unaccountable even the way they interpret the law is classified. 

“In many cases these will be law-abiding Americans having perfectly legitimate, often sensitive, conversations,” Wyden wrote. “These Americans could include journalists, foreign aid workers, people with family members overseas—even women trying to get abortion medication from an overseas provider. Congress has an obligation to protect our country from foreign threats and protect the rights of these and other Americans.” 

We have 10 days to make it clear to Congress: 702 needs real reforms. Not a blanket reauthorization. Not lip service to change. Real reform.

Take action

Tell Congress: 702 Needs Reform

Stop New York's Attack on 3D Printing

Apr. 16th, 2026 08:31 pm
[syndicated profile] efforg_feed

Posted by Rory Mir, Nathan Sheard

New York's proposed 2026-2027 budget currently includes provisions that will require all 3D printers sold in the state to run print-blocking censorware—software that surveils every print for forbidden designs. This policy would also create felony charges for possessing or sharing certain design files. The vote on the state budget could happen as early as next week, so New Yorkers need to act fast and demand that their Assemblymembers and Senators strip this provision from the budget.

Take action

Tell Your Representative to Stand with Creators

State legislators across the US are rushing to regulate 3D-printed firearms under the syllogism “something must be done; there, I’ve done something.” The most reckless of these proposals is a mandate for manufacturers to implement print blocking on all 3D printers. We, and other experts, have already pointed out that this algorithmic print blocking is simply unfeasible and will only serve to stifle competition, free expression, and privacy. While the creative communities lawfully using these printers will suffer most, every New Yorker will be impacted by this blow to innovation.

This policy is unfortunately buried in Part C of New York State’s proposed budget for the 2026-2027 fiscal year (S.9005 / A.10005), which is urgently moving toward a vote after extensive delays. It’s also bundled with a policy that would allow felony charges to be brought against researchers and journalists for sharing design files restricted by the state. The worst of these impacts won’t be known until after the budget is negotiated behind closed doors, with no safeguards for creative expression or privacy.

Researchers and Journalists Could Face Felony Charges

Part C Subpart A of the budget includes two particularly concerning provisions: §§ 2.10 and 2.11. These threaten Class E felony charges for distributing or possessing design files that would produce firearm parts on a 3D printer or CNC machine.

The first provision, 2.10, makes it a felony to sell or distribute files that can produce major firearm components to someone who is not a federally and NY-licensed gunsmith. Under 2.11, it’s also a felony to possess these files if you intend to illegally print a firearm or share them with someone you believe is not permitted to own or smith a firearm.

A journalist reporting on 3D-printed guns. A researcher studying printable firearms. An artist incorporating parts into a new work commenting on gun culture. Under these provisions merely sharing a print file with any of them could result in criminal charges, even if no one involved intends to assemble a firearm.

Criminalizing information doesn’t work. Someone intent on illegally printing a firearm is already subject to charges for that act. Adding felony liability for simply possessing a file or design piles on additional charges while doing nothing to stop printing. New charges for someone distributing these files won’t make them inaccessible to lawbreakers, but they will have a chilling effect on legitimate and entirely legal work. 

Unsurprisingly, a similar law was proposed and subsequently scrapped in Colorado due to First Amendment concerns. We recommend New York do the same.

Take action

Tell Your Representative to Stand with Creators

Mandated Surveillance, Less Access

Part C Subpart B would require every 3D printer and CNC machine sold in New York to include algorithms that scan your design files and block prints the system identifies as producing firearm components. Furthermore, all sales and deliveries of these machines must be made face-to-face. 

Unlike other bills we have seen, there are no exceptions to this mandate. These restrictions apply to sales to researchers, commercial manufacturers, and—oddly enough—federally and state-licensed gunsmiths.

Applying these restrictions to CNC machine sellers is particularly absurd. These cousins of 3D printers, which make 3D objects by removing material, often cost tens of thousands of dollars and are used by commercial manufacturers. Automotive, aerospace, medical manufacturers, and many other industries will be subject to the in-person sales requirement, the surveillance risk, and all the other problems these print-blocking algorithms introduce.

Even limiting the focus to individual buyers—hobbyists and artists who use these machines at home—this restriction to face-to-face sales comes with its own issues. Beyond unnecessarily complicating the use of printers in the state, this barrier to access will hit rural New Yorkers the hardest. People in rural or remote locations stand to benefit most from the time and costs saved by printing useful parts at home. With this restriction, they will need to drive to one of the few retailers that actually sell this equipment and settle for whatever models they stock.

That is, if sellers continue to stock these printers despite the risk. Subpart B §§ 2.3 and 2.5 open sellers, including anyone on the second-hand market, up to liability for selling out-of-date printers. Meanwhile, buyers hoping to illegally print firearms can simply build their own printer from widely available equipment.

The Law Won’t Work as Advertised 

Here’s what makes Subpart B of the New York budget particularly reckless: the technology it mandates is not capable of doing what it is supposed to do.

There is very little detail provided about requirements for the mandated algorithms. What the bill does outline boils down to this: the algorithms must evaluate print files to determine whether they would produce a firearm or illegal firearm parts, and if so, block the print. In an attempt to enable this, New York state would also create and maintain a library of forbidden files with tightly restricted access. 

We’ve already gone over why this idea simply won’t work. Design files are trivially easy to modify, split into segments, or otherwise alter to evade pattern detection. Even if printers fully rendered and analyzed the print with cloud-based AI, any number of design or post-print tricks can be used to dodge detection. Meanwhile, such fuzzy AI interpretation will rapidly increase the percentage of lawful prints censored. 
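To make the evasion problem concrete, here is a minimal sketch, assuming the blocklist works the way such mandates usually do: by matching files against a library of known-forbidden designs. The `is_blocked` function and the file contents are hypothetical, invented purely for illustration.

```python
import hashlib

def is_blocked(file_bytes: bytes, blocklist: set[str]) -> bool:
    # Hypothetical check: compare the file's hash against a
    # state-maintained library of forbidden design files.
    return hashlib.sha256(file_bytes).hexdigest() in blocklist

# An arbitrary stand-in for a "forbidden" design file.
original = b"solid part\n  facet normal 0 0 1\nendsolid part\n"
blocklist = {hashlib.sha256(original).hexdigest()}

print(is_blocked(original, blocklist))                            # True
print(is_blocked(original + b"; added whitespace\n", blocklist))  # False
```

Any one-byte change—added whitespace, reordered facets, a file split into segments—produces a different hash, so exact matching only ever catches unmodified copies.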

Firearms aren’t a highly specific design like paper currency; these proposed algorithms are futilely attempting to block an infinite number of designs capable of—or that can be made capable of—the few simple mechanical functions that make up a firearm. 
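The same failure extends past byte-level matching. Below is a toy geometry-aware fingerprint (rounded, sorted vertex coordinates, a scheme invented here for illustration, not anything the bill specifies): a nudge of a few thousandths of a unit to the mesh leaves the part mechanically identical but changes the fingerprint entirely.

```python
import hashlib

def fingerprint(vertices, precision=2):
    # Toy geometry-aware fingerprint: round coordinates to absorb
    # formatting noise, sort to absorb vertex reordering, then hash.
    rounded = sorted(tuple(round(c, precision) for c in v) for v in vertices)
    return hashlib.sha256(repr(rounded).encode()).hexdigest()

part = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.0, 2.5)]
# Shift every vertex by 0.008 units: functionally the same part.
nudged = [(x + 0.008, y, z) for (x, y, z) in part]

print(fingerprint(part) == fingerprint(nudged))  # False: detection evaded
```

Reordering vertices does not fool this scheme, but a tiny coordinate perturbation does—and tightening the rounding to catch perturbations would instead start flagging unrelated lawful designs.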

As we’ve said before: the internet always routes around censorship. Anyone determined to print a prohibited object has straightforward workarounds. The people who get surveilled and blocked are the people trying to follow the law.

The bill aims to enforce this impossible mandate by creating a working group to define the actual technical requirements of enforcement—but only after the law passes. This group has no peer-review requirements, so it could easily be loaded with profiteers or incumbent manufacturers who are already lining up to participate. These incumbents stand to profit from shutting out new competitors, locking users into their devices and sellers into their platforms, and subjecting both to the kind of enshittification seen with Digital Rights Management (DRM) software. There are also no safeguards in the law to prevent the most surveillance-heavy approaches to print scanning, or to stop this censorship infrastructure from being further weaponized against lawful speech.

On the other hand, unbiased experts in open-source manufacturing in the working group can at best pause the clock by showing such algorithms are unfeasible. That is, until a new snake oil company comes along to restart it. 

New York Won't Be the Last Stop 

New York is one of the largest consumer markets in the country. When it mandates a feature in hardware, manufacturers hardly ever build a New York-only version. They build the New York version and sell it globally. A print-blocking mandate adopted in New York will become the national standard in practice.

New Yorkers deserve more than this rush job buried in a budget bill. It is an unfeasible tech solution, drafted without the consumer protections that any serious policy proposal would require, and it creates new costs and inconveniences amid a protracted annual budget process. It also threatens First Amendment protections. Negotiated behind closed doors and without consumer guardrails, this policy risks the worst outcomes for the grassroots innovation and creativity these machines enable. Worse still, these practices could become the norm across other states and among 3D-printer manufacturers worldwide.

Your representatives could vote on this ill-conceived measure in the next week. If you're a New Yorker, email your legislators now and tell them to strip this measure from the budget today.

Take action

Tell Your Representative to Stand with Creators

[syndicated profile] snopes_feed

Posted by Nur Ibrahim

A report from The Atlantic relies on anonymous sources who shared several instances of the FBI director's alleged behavior.
[syndicated profile] snopes_feed

Posted by Emery Winter

The Trump administration is ending the Archdiocese of Miami's charity grant for sheltering and caring for migrant children who enter the U.S. alone.

Rachel Reid, Fiction, & More

Apr. 20th, 2026 03:30 pm
[syndicated profile] smartbitches_feed

Posted by Amanda

Bury Our Bones in the Midnight Soil

Bury Our Bones in the Midnight Soil by V.E. Schwab is $5.99! This came out last summer and was a big release. If you’re still waiting on that library hold, maybe grab this one.

From V. E. Schwab, the #1 New York Times bestselling author of The Invisible Life of Addie LaRue: a new genre-defying novel about immortality and hunger.

This is a story about hunger.
1532. Santo Domingo de la Calzada.
A young girl grows up wild and wily—her beauty is only outmatched by her dreams of escape. But María knows she can only ever be a prize, or a pawn, in the games played by men. When an alluring stranger offers an alternate path, María makes a desperate choice. She vows to have no regrets.

This is a story about love.
1827. London.
A young woman lives an idyllic but cloistered life on her family’s estate, until a moment of forbidden intimacy sees her shipped off to London. Charlotte’s tender heart and seemingly impossible wishes are swept away by an invitation from a beautiful widow—but the price of freedom is higher than she could have imagined.

This is a story about rage.
2019. Boston.
College was supposed to be her chance to be someone new. That’s why Alice moved halfway across the world, leaving her old life behind. But after an out-of-character one-night stand leaves her questioning her past, her present, and her future, Alice throws herself into the hunt for answers . . . and revenge.

This is a story about life—
how it ends, and how it starts.

Add to Goodreads To-Read List →

You can find ordering info for this book here.


Time to Shine

Time to Shine by Rachel Reid is $1.99! I believe this is a standalone contemporary from Reid. Lara reviewed this one and gave it a B:

I read this book in a day. A delicious day. A work day! In between meetings and emails I was reading this book. (Sorry, boss.)

For Landon Stackhouse, being called up from the Calgary farm team is exciting and terrifying, even if, as the backup goalie, he rarely leaves the bench. A quiet loner by nature, Landon knows he gives off strong “don’t talk to me” vibes. The only player who doesn’t seem to notice is Calgary’s superstar young winger, Casey Hicks.

Casey treats Landon like an old friend, even though they’ve only interacted briefly in the past. He’s endlessly charming and completely laid-back in a way that Landon absolutely can’t relate to. They couldn’t have less in common, but Landon needs a place to live that’s not a hotel room and Casey has just bought a massive house—and hates being alone.

As roommates, Casey refuses to be defeated by Landon’s one-word answers. As friends, Landon comes to notice a few things about Casey, like his wide, easy smile and sparkling green-blue eyes. Spending the holidays together only intensifies their bromance-turned-romance. But as the new year approaches, the countdown to the end of Landon’s time in Calgary is on.

Add to Goodreads To-Read List →

You can find ordering info for this book here.


Wild Dark Shore

Wild Dark Shore by Charlotte McConaghy is $2.99! I hesitate to call this one “recommended” but we had an amazing guest post about this book. Be warned, it will definitely make you cry.

From the beloved, New York Times bestselling author of Migrations and Once There Were Wolves, a novel about a family living alone on a remote island, when a mysterious woman washes up on shore

A family on a remote island. A mysterious woman washed ashore. A rising storm on the horizon.

Dominic Salt and his three children are caretakers of Shearwater, a tiny island not far from Antarctica. Home to the world’s largest seed bank, Shearwater was once full of researchers. But with sea levels rising, the Salts are now its final inhabitants, packing up the seeds before they are transported to safer ground. Despite the wild beauty, isolation has taken its toll on the Salts. Raff, eighteen and suffering his first heartbreak, can only find relief at his punching bag; Fen, seventeen, has started spending her nights on the beach among the seals; nine-year-old Orly, obsessed with botany, fears the loss of his beloved natural world; and Dominic can’t stop turning back toward the past, and the loss that drove the family to Shearwater in the first place.

Then, during the worst storm the island has ever seen, a woman washes up on shore. As the Salts nurse the woman, Rowan, back to life, their suspicion gives way to affection, and they finally begin to feel like a family again. Rowan, long accustomed to protecting her heart, begins to fall for the Salts, too. But Rowan isn’t telling the whole truth about why she set out for Shearwater. And when she discovers the sabotaged radios and a freshly dug grave, she realizes Dominic is keeping his own dark secrets. As the storms on Shearwater gather force, can they trust each other enough to protect one another—and the precious seeds in their care? And can they finally put the tragedies of the past behind them to create something new, together?

A novel of heart-stopping twists, dizzying beauty, and ferocious love, Wild Dark Shore is about the impossible choices we make to protect the people we love, even as the world around us is ending.

Add to Goodreads To-Read List →

You can find ordering info for this book here.


Book People

Book People by Jackie Ashenden is 99c! This contemporary romance was mentioned on Hide Your Wallet. This was also our third bestselling book of last year.

Don’t miss this utterly charming, spicy, enemies-to-lovers rom-com from Jackie Ashenden!

When Kate, a fledgling bookseller, decides to open a bookshop that celebrates the kinds of genre fiction she loves to read (popular and fun!), she’s surprised to find that not everyone in the town is as excited as she is.

Least excited of all? Sebastian, owner of the highbrow bookshop across the road, who has rules for everything: the kind of books he sells, the clothes he wears, and the people he dates (no-one local).

When the pair find themselves working together on the town’s literary festival, their growing attraction becomes harder and harder to ignore. Professional rivalry aside, just one steamy kiss can’t mean anything, can it?

Add to Goodreads To-Read List →

You can find ordering info for this book here.

