Uh, are they admitting they are trying to circumvent technological protections set up to restrict access to a system?
Isn’t that a literal computer crime?
No-no, see. When an AI-first company does it, it’s actually called courageous innovation. Crimes are for poor people
See: Facebook/Meta
*puts on evil hat* CloudFlare should DRM their protection, then DMCA Perplexity and other US-based “AI” companies into oblivion. Side effect: might break the Internet.
Worth it.
The Internet was already ruined; Cloudflare is just band-aids on top of band-aids.
Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.
So, I assume Perplexity uses an appropriate, identifiable User-Agent header, to allow hosts to decide whether to serve them one way or another?
yeah it’s almost like there was already a system for this in place
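For what it’s worth, the system in question is just the User-Agent header; a minimal sketch of a self-identifying request (the bot name and info URL here are made up for illustration):

```python
import requests

# A self-identifying User-Agent: bot name, version, and a URL where site
# operators can read what the bot does and how to block it.
headers = {
    "User-Agent": "ExampleAnswerBot/1.0 (+https://example.com/bot-info)"
}

resp = requests.get("https://example.com/article", headers=headers)
print(resp.status_code)
```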
THE CAKE DAY IS NOW. (i don’t have an image at hand)
i really wish we wouldn’t do those. feels too reddity.
but thanks.
as you wish
*monkeys paw curls and i turn into cake*
And I’m assuming if the robots.txt states their User-Agent isn’t allowed to crawl, it obeys it, right? :P
No; as per the article, their argument is that they are not web crawlers generating an index, but user-action-triggered agents working live for the user.
Except it’s not a live user hitting 10 sites all at the same time, trying to crawl the entire site… Live users cannot do that.
That said, if my robots.txt forbids them from hitting my site, as a proxy, they obey that, right?
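For reference, the opt-out itself is a few lines of robots.txt; a sketch blocking one declared crawler while leaving everyone else alone (Perplexity does document “PerplexityBot” as a crawler token, though the whole dispute is about traffic that doesn’t declare itself):

```
# robots.txt at the site root
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
```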
It’s not up to the host to decide whom to serve content to. The web is intended to be user-agent agnostic.
I don’t like Cloudflare, but it’s nice that they allow people to stop AI scraping if they want to
CloudFlare has become an Internet protection racket and I’m not happy about it.
It’s been this way from the very beginning. But they don’t fit the definition of a protection racket, as they’re not the ones attacking you if you don’t pay up. So they’re more like a security company that has no competitors, due to the investment needed to operate.
Cloudflare is notorious for shielding cybercrime sites. You can’t even complain to Cloudflare about abuse; they’ll just forward your abuse complaint to the likely dodgy host of the cybercrime site. They don’t even have a channel for complaints about network abuse of their DNS services.
So they certainly are an enabler of the cybercriminals they purport to protect people from.
Any internet service provider needs to be completely neutral. Not only in their actions, but also in their liability.
Same goes for other services like payment processors.
If companies that provide content-agnostic services are allowed to police the content, that opens the door to really nasty stuff. You can’t chop off everyone’s arms to stop a few people from stealing.
If they think their services are being used in a reprehensible manner, what they need to do is alert the authorities, not act like vigilantes.
If they acted differently, they’d probably be liable for illegal activity that they proxy for (this is for example relevant for the DMCA safe harbor).
Anyhow, on their abuse page, there is an option for “Registrar”, which is used for “DNS abuse”, among others.
they’re good at protecting websites, but damn, having a company as a MITM feels so wrong
The shit they know. Plus their support for non-JS users or Tor is pure shite
Yeah, a few sites outright refuse to work because cloudflare just poops. EDIT: It was supposed to say “loops”, but I’m keeping it.
You could say they are… Perplexed.
Can’t believe I’ve lived to see Cloudflare be the good guys
They’re not. They’re using this as an excuse to become paid gatekeepers of the internet as we know it. All that’s happening is that Cloudflare is maneuvering into a position where they can say “nice traffic you’ve got there - would be a shame if something happened to it”.
AI companies are crap.
What Cloudflare is doing here is also crap.
And we’re cheering it on.
Lesser of two bad guys maybe?
they can’t get their AI to check a box that says “I am not a robot”? I’d think that’d be a first-year comp sci student level task. And robots.txt files were basically always voluntary compliance anyway.
Cloudflare actually fully fingerprints your browser and even sells that data. That’s your IP, TLS, operating system, full browser environment, installed extensions, GPU capabilities, etc. It’s all tracked before the box even shows up; in fact, the box is there to give the runtime more time to fingerprint you.
Yeah and the worst part is it doesn’t fucking work for the one thing it’s supposed to do.
The only thing it does is stop the stupidest low-effort scrapers and force the good ones to use a browser.
Recaptcha v2 does way more than check if the box was checked.
you’re not wrong, but it also allows more than 99.8% of the bot traffic through on text challenges. It’s like the TSA of website security: it’s mostly there to keep the user busy while Cloudflare places itself as a man in the middle of your encrypted connection to a third party. The only difference between Cloudflare and a malicious attacker is Cloudflare’s stated intention not to be evil. With that and 3 dollars I can buy myself a single hard shell taco from Taco Bell.
Here comes the ridiculous offer to buy Google Chrome with money they don’t have: easy, delicious scraping directly from the user source
Gee, that’s a real removed, ain’t it, Perplexity?
I don’t see a problem here. Maybe Perplexity should consider the reasons WHY Cloudflare have a firewall…?
They do have a point though. It would be great to let per-prompt searches go through, but not mass scraping
I believe a lot of websites don’t want either of those, though
Does it not need to be scraped to be indexed, assuming it’s semi-typical RAG stuff?
I assume their script does some search engine stuff, like querying Google or Bing and then “scraping” the links it gets back
Some Selenium stuff
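Presumably something like this minimal Selenium sketch (the search URL and the CSS selector are my guesses, not anything Perplexity has published):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real browser so basic bot checks see normal browser traffic.
driver = webdriver.Firefox()
try:
    # Step 1: run the query through an ordinary search engine.
    driver.get("https://www.bing.com/search?q=example+query")
    links = [a.get_attribute("href")
             for a in driver.find_elements(By.CSS_SELECTOR, "li.b_algo h2 a")]

    # Step 2: visit each hit and pull the page text to feed the model.
    pages = []
    for url in links[:5]:
        driver.get(url)
        pages.append(driver.find_element(By.TAG_NAME, "body").text)
finally:
    driver.quit()
```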
It’s insane that anyone would side with Cloudflare here. To this day I can’t visit many websites, like nexusmods, just because I run Firefox on Linux. The Cloudflare turnstile just refreshes infinitely and has been for months now.
Cloudflare is the biggest cancer on the web, fucking burn it.
Linux and Firefox here. No problem at all with Cloudflare, despite having more or less as many privacy-preserving add-ons as possible. I even spoof my user agent to the latest Firefox ESR on Linux.
Something may be wrong with your setup.
I suspect a lot of it comes down to your ISP. Like the original commenter, I also frequently can’t pass Cloudflare Turnstile when on Wi-Fi, although refreshing the page a few times usually gets me through. Worst case, on my phone’s hotspot I pass much more consistently. It’s super annoying, and combined with their recent DNS outage it has totally ruined any respect I had for Cloudflare.
Interesting video on the subject: https://youtu.be/SasXJwyKkMI
That’s not how it works. CF uses thousands of variables to estimate a trust score and block people, so just because it works for you doesn’t mean it works for everyone.
Same goes the other way: just because it doesn’t work for you doesn’t mean it should go away.
That technology has its uses, and Cloudflare is probably aware that there are still some false positives, and is probably working on it as we write.
The decision is for the website owner to make, weighing the advantage of filtering out a majority of bots against the disadvantage of losing some legitimate traffic to false positives. If you get a Cloudflare challenge, chances are they decided that the former vastly outweighs the latter.
Now there are some self-hosted alternatives, like Anubis, but business clients prefer SaaS like Cloudflare over having to maintain their own software. Once again, it is their choice and liberty to do so.
lmao imagine shilling for corporate Cloudflare like this. Also, false positives vs false negatives are fundamentally not equal.
Cloudflare is probably aware that there are still some false positives, and is probably working on it as we write.
The main issue with Cloudflare is that it’s mostly bullshit. It does not report any stats to the admins on how many users were rejected, or any false positive rates, and happily puts everyone under the “evil bot” umbrella. So people from low trust score environments like Linux or IPs from poorer countries are at a significant disadvantage and left without a voice.
I’m literally a security dev working with Cloudflare anti-bot myself (not by choice). It’s a useful tool for corporate but a really fucking bad one for the health of the web, much worse than any LLM agent or crawler, period.
So people from low trust score environments like Linux
Linux user here, Cloudflare hasn’t blocked access to a single page for me unless I use a VPN, which then can trigger it.
Ah, the good old “you don’t agree with me so you must be shilling for X” argument. I suppose you are shilling for the bots then, am I right?
I’m on Linux with Firefox and have never had that issue before (particularly nexusmods which I use regularly). Something else is probably wrong with your setup.
Thirded. All three (Linux, FF, nexus)
ZERO ISSUES.
In my case, it’s usually the VPN.
“Wrong with my setup” - that’s not how the internet works.
I’m based in South East Asia and often work on the road, so IP reputation is probably the deciding factor in my fingerprint score.
Either way, this should in no way be acceptable.
That is exactly how the internet works. That’s always how the internet has worked.
It happened to me before, until I did a Google search. It was my VPN’s web protection. It was too “overprotective”.
Check your security settings, antivirus and VPN
omg ur a hacker
Did you mean Edge on Windows? 'Cause if so, welcome in!
I’ve developed my own agent for assisting me with researching a topic I’m passionate about, and I ran into the exact same barrier: Cloudflare intercepts my request and is clearly checking if I’m a human using a web browser. (For my network requests, I’ve defined my own user agent.)
So I use that as a signal that the website doesn’t want automated tools scraping their data. That’s fine with me: my agent just tells me that there might be interesting content on the site and gives me a deep link. I can extract the data and carry on my research on my own.
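In case it helps anyone building something similar, the check in my agent is roughly this (a sketch; the “cf-mitigated” header is what I’ve seen on Cloudflare challenge responses, but treat the exact header as an assumption rather than a documented contract):

```python
import requests

UA = "MyResearchAgent/0.1 (+https://example.com/about-this-agent)"  # my own UA

def fetch_or_defer(url: str) -> dict:
    resp = requests.get(url, headers={"User-Agent": UA}, timeout=30)
    # Heuristic: a Cloudflare managed challenge tends to come back as a 403
    # with a "cf-mitigated: challenge" header. Treat that as "no bots, please"
    # and hand the human a deep link instead of scraping.
    if resp.status_code == 403 and resp.headers.get("cf-mitigated") == "challenge":
        return {"status": "deferred", "deep_link": url}
    return {"status": "fetched", "text": resp.text}
```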
I completely understand where Perplexity is coming from, but at scale, implementations like ~~this~~ Perplexity’s are awful for the web. (Edited for clarity)
I hate to break it to you, but not only does Cloudflare do this sort of thing, so do Akamai, AWS, and virtually every other CDN provider out there. And far from being awful, it’s actually protecting the web.
We use Akamai where I work, and they inform us in real time when a request comes from a bot, and further classify it as one of a dozen or so bot types (search engine crawlers, analytics bots, advertising bots, social networks, AI bots, etc.). It also informs us if it’s somebody impersonating a well-known bot like Google. So we can easily allow search engines to crawl our site while blocking AI bots, bots impersonating Google, and so on.
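As a rough sketch of what the origin-side rule ends up looking like (the header name and category strings here are hypothetical; the real classification comes from the bot manager’s configuration, not any standard):

```python
# Hypothetical origin-side check: the CDN has already classified the request
# and forwarded its verdict in a header (name and values made up here).
ALLOWED = {"human", "search_engine", "social_network", "analytics"}
BLOCKED = {"ai_bot", "impersonator"}  # e.g. fake Googlebots

def decide(headers: dict) -> int:
    category = headers.get("X-Bot-Category", "human")  # hypothetical header
    return 403 if category in BLOCKED else 200
```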
By “things like this are awful for the web,” I meant that automation through AI is awful for the web. It takes away from the original content creators without any attribution and hits their bottom line.
My story was supposed to be one about responsible AI, but somehow I screwed that up in my summary.
When a firm outright admits to bypassing, or trying to bypass, measures taken to keep them out, you’d think that would be a slam-dunk case of unauthorized access under the CFAA, with felony enhancements.
Fuck that. I don’t need prosecutors and the courts to rule that accessing publicly available information in a way the website owner doesn’t want is literally a crime. That logic would extend to ad blockers and editing HTML/JS in an “inspect element” tab.
That logic would not extend to ad blockers, as the point of concern is gaining unauthorized access to a computer system or asset. Blocking ads would not be considered gaining unauthorized access to anything. In fact it would be the opposite of that.
gaining unauthorized access to a computer system
And my point is that defining “unauthorized” to include visitors using unauthorized tools/methods to access a publicly visible resource would be a policy disaster.
If I put a banner on my site that says “by visiting my site you agree not to modify the scripts or ads displayed on the site,” does that make my visit with an ad blocker “unauthorized” under the CFAA? I think the answer should obviously be “no,” and that the way to define “authorization” is whether the website puts up some kind of login/authentication mechanism to block or allow specific users, not to put a simple request to the visiting public to please respect the rules of the site.
To me, a robots.txt is more like a friendly request to unauthenticated visitors than it is a technical implementation of some kind of authentication mechanism.
Scraping isn’t hacking. I agree with the Third Circuit and the EFF: If the website owner makes a resource available to visitors without authentication, then accessing those resources isn’t a crime, even if the website owner didn’t intend for site visitors to use that specific method.
Site owners currently do and should have the freedom to decide who is and is not allowed to access the data, and to decide for what purpose it gets used. Idgaf if you think scraping is malicious or not; it is and should be illegal to violate clear and obvious barriers against it, at the owners’ cost and for the scrapers’ unsanctioned profit off the site owners’ work.
to decide for what purpose it gets used
Yeah, fuck everything about that. If I’m a site visitor I should be able to do what I want with the data you send me. If I bypass your ads, or use your words to write a newspaper article that you don’t like, tough shit. Publishing information is choosing not to control what happens to the information after it leaves your control.
Don’t like it? Make me sign an NDA. And even then, violating an NDA isn’t a crime, much less a felony punishable by years of prison time.
Interpreting the CFAA to cover scraping is absurd and draconian.
If you want anybody and everyone to be able to use everything you post for any purpose, right on, good for you, but don’t try to force your morality on others who rely on their writing, programming, and artworks to make a living to survive.
I’m gonna continue to use ad blockers and yt-dlp, and if you think I’m a criminal for doing so, I’m gonna say you don’t understand either technology or criminal law.
When sites put up challenges like Anubis or other measures to authenticate that the viewer isn’t a robot, and scrapers then employ measures to thwart that authentication (via spoofing or other means), I think that’s reasonably a violation of the CFAA in spirit — especially since these mass scraping activities are getting attention for the damage they are causing to site operators (another factor in the CFAA, and one that would promote this to felony activity).
The fact is these laws are already on the books, we may as well utilize them to shut down this objectively harmful activity AI scrapers are doing.
That same logic is how Aaron Swartz was cornered into suicide for scraping JSTOR, and it’s widely agreed to be a bad reading by a wide range of legal experts, including SCOTUS in its 2021 decision Van Buren v. US, which struck this interpretation off the books.
The fact is these laws are already on the books, we may as well utilize them to shut down this objectively harmful activity AI scrapers are doing.
Silly plebe! Those laws are there to target the working class, not to be used against corporations. See: Copyright.
Nah, that would also mean using Newpipe, YoutubeDL, Revanced, and Tachiyomi would be a crime, and it would only take the re-introduction of WEI to extend that criminalization to the rest of the web ecosystem. It would be extremely shortsighted and foolish of me to cheer on the criminalization of user spoofing and browser automation because of this.
Do you think DoS/DDoS activities should be criminal?
If you’re a site operator and the mass AI scraping is genuinely causing operational problems (not hard to imagine, I’ve seen what it does to my hosted repositories pages) should there be recourse? Especially if you’re actively trying to prevent that activity (revoking consent in cookies, authorization captchas).
In general I think the idea of “your right to swing your fists ends at my face” applies reasonably well here — these AI scraping companies are giving lots of admins bloody noses and need to be held accountable.
I really am amenable to arguments wrt the right to an open web, but look at how many sites are hiding behind CF and other portals, or outright becoming hostile to any scraping at all; we’re already seeing the rapid death of the ideal because of these malicious scrapers, and we should be using all available recourse to stop this bleeding.
DoS attacks are already a crime, so of course the need for some kind of solution is clear. But any proposal that gatekeeps the internet and restricts the freedoms with which the user can interact with it is no solution at all. To me, the openness of the web shouldn’t be something that people just consider, or are amenable to. It should be the foundation in which all reasonable proposals should consider as a principle truth.
If I put a banner on my site that says “by visiting my site you agree not to modify the scripts or ads displayed on the site,” does that make my visit with an ad blocker “unauthorized” under the CFAA?
How would you “authorize” a user to access assets served by your systems based on what they do with them after they’ve accessed them? That doesn’t logically follow so no, that would not make an ad blocker unauthorized under the CFAA. Especially because you’re not actually taking any steps to deny these people access either.
AI scrapers, on the other hand, are a type of user that you’re not authorizing to begin with, and if you’re using Cloudflare’s bot protection you’re putting into place a system to deny them access. To purposefully circumvent that access would be considered unauthorized.
That doesn’t logically follow so no, that would not make an ad blocker unauthorized under the CFAA.
The CFAA also criminalizes “exceeding authorized access” in every place it criminalizes accessing without authorization. My position is that mere permission (in a colloquial sense, not necessarily technical IT permissions) isn’t enough to define authorization. Social expectations and even contractual restrictions shouldn’t be enough to define “authorization” in this criminal statute.
To purposefully circumvent that access would be considered unauthorized.
Even as a normal non-bot user who sees the cloudflare landing page because they’re on a VPN or happen to share an IP address with someone who was abusing the network? No, circumventing those gatekeeping functions is no different than circumventing a paywall on a newspaper website by deleting cookies or something. Or using a VPN or relay to get around rate limiting.
The idea of criminalizing scrapers or scripts would be a policy disaster.
You say, just as news breaks that the top German court has overturned a decision that declared “ad blocking isn’t piracy”
Unauthorized access into a computer system and “Piracy” are two very different things.
Please instruct me on how I go to the timeline where the legal system always makes decisions based on logic, reasoning, evidence and fairness and not…the opposite…of all those things
You have a lot of trust placed in the courts to actually do the right thing
I’m not saying courts couldn’t rule whatever way they want. But the laws we have today would not allow ad blocking to be considered unauthorized access. Not under the CFAA, as mentioned.
I said “The logic would not extend to that” not that a legal system could not act illogically.
The original comment reply to you was all about how the legal system would act, that’s the primary concern. All it would take is a Trump loyalist judge, a Trump leaning appeals court and the right-wing Supreme Court and boom suddenly the CFAA covers a whole lot more than what was “logical”
Ehhhh, you are gaining access to content under the assumption that you are going to interact with ads and thus bring revenue to the person and/or company producing said content. If you block ads, you remove the authorisation brought to you by ads.
Careful, this way even not looking at an ad positioned at the bottom of the page (or otherwise not visible without scrolling) would mean removing the authorisation brought to you by ads.
That doesn’t make any logical sense. You can’t tie legal authorization to an unsaid, implicit assumption, especially when that is in turn based on what you do with the content you’ve retrieved from a system after you’ve accessed and retrieved it.
When you access a system, are you authorized to do so, or aren’t you? If you are, that authorization can’t be retroactively revoked. If that were the case, you could be arrested for having used a computer at a job, once you’ve quit. Because even though you were authorized to use it and your corporate network while you worked there, now that you’ve quit and are no longer authorized that would apply retroactively back to when you DID work there.
There was no header on the request saying “I want ads”, though
They already prosecute people under the unauthorized access provision. They just don’t prosecute rich people under it.
They prosecuted and convicted a guy under the CFAA for figuring out the URL schema for an AT&T website designed to be accessed by the iPad when it first launched, and then just visiting that site by trying every URL in a script. And then his lawyer (the foremost expert on the CFAA) got his conviction overturned:
https://www.eff.org/cases/us-v-auernheimer
We have to maintain that fight, to make sure that the legal system doesn’t criminalize normal computer tinkering, like using scripts or even browser settings in ways that site owners don’t approve of.
Right? Isn’t this a textbook DMCA violation, too?
for us, not for them. wait until they argue in court that actually it’s us at fault and we need to provide access or else