How do you feel about your content getting scraped by AI models?

lemmy.dbzer0.com/pictrs/image/4a262eff-3643-4c3…

submitted by llama@lemmy.dbzer0.com

edited

How do you feel about your content getting scraped by AI models?

I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has *scraping* capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.


Prompt Update

The prompt was something like, *What do you know about the user [email protected] on Lemmy? What can you tell me about his interests?*" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

*It even talked about this very post on item 3 and on the second bullet point of the "Notable Posts" section.*

For more information, check this comment.


Edit¹: This is Perplexity. Perplexity AI employs data scraping techniques to gather information from various online sources, which it then utilizes to feed its large language models (LLMs) for generating responses to user queries. The scraping process involves automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. It is an advanced conversational search engine that enhances the research experience by providing concise, sourced answers to user queries. It operates by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. (12/28/2024)

Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals may have posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raise questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)

Edit³: I added the second image to the post and its description. (12/29/2024).

269

Log in to comment

103 Comments

the fediverse is largely public. so i would only put here public info. ergo, i dont give a shit what the public does with it.

Do you own your own words?

I don't think it's unreasonable to be uneasy with how technology is shifting the meaning of what public is. It used to be walking the dog meant my neighbors could see me on the sidewalk while I was walking. Now there are ring cameras, etc. recording my every movement and we've seen that abused in lots of different ways.

The internet has always been a grand stage, though. We're like 40 years into this reality at this point.

I think people who came-of-age during Facebook missed that memo, though. It was standard, even explicitly recommended to never use your real name or post identifying information on the internet. Facebook kinda beat that out of people under the guise of "only people you know can access your content, so it's ok". People were trained into complacency, but that doesn't mean the nature of the beast had ever changed.

People maybe deluded themselves that posting on the internet was closer to walking their dog in their neighbourhood than it was to broadcasting live in front of international film crews, but they were (and always have been) dead wrong.

We're like 40 years into this reality at this point.

We are not 40 years into everyone's every action (online and, increasingly, even offline via location tracking and facial recognition cameras) being tracked, stored in a database, and analyzed by AI. That's both brand new and *way* worse than even what the pre-Facebook "don't use your real name online" crowd was ever warning about.

I mean, yes, back in the day it was understood that the stuff you actively write and post on Usenet or web forums might exist forever (the latter, assuming the site doesn't get deleted or at least gets archived first), but (a) that's still only stuff you actively chose to share, and (b) at least at the time, it was mostly assumed to be a person actively searching who would access it -- that retrieving it would take a modicum of effort. And even *that* was correctly considered to be a great privacy risk, requiring vigilance to mitigate.

These days, having an entire industry dedicated to actively stalking every user for every passive signal and scrap of metadata they can possibly glean, while moreover the users themselves are much more "normie"/uneducated about the threat, is materially *even worse* by a wide margin.

Our choices regarding security and privacy are always compromises. The uneasy reality is that new tools can change the level of risk attached to our past choices. People may have been OK with others seeing their photos but aren't comfortable now that AI deep fakes are possible. But with more and more of our lives being conducted in this space, do even knowledgable people feel forced to engage regardless?

People think there are only two categories, private and public, but there are now actually three: private, public, and panopticon.

I couldn't agree more!

But what if a shitposting AI posts all the best takes before we can get to them.

Is the world ready for High Frequency Shitposting?

Is the world ready for High Frequency Shitposting?

The lemmy world? Not at all. Instances have no automated security mechanisms. The mod system consisting mostly of self important ***'s would break down like straw. Users cannot hold back, but would write complaints in exponential numbers, or give up using lemmy within days...

Deleted by author

reply
1
, edited

I run my own instance and have a long list of user agents I flat out block, and that includes all known AI scraper bots.

That only prevents them from scraping from my instance, though, and they can easily scrape my content from any other instance I've interacted with.

Basically I just accept it as one of the many, *many* things that sucks about the internet in 2024, yell "*Serenity Now!*" at the sky, and carry on with my day.

I do wish, though, that other instances would block these LLM scraping bots but I'm not going to avoid any that don't.

you might be interested to know that UA blocking is not enough: https://feddit.bg/post/13575

the main thing is in the comments

by [deleted]

Deleted by author

reply
38

I understand that Perplexity employs various language models to handle queries and that the responses generated may not directly come from the training data used by these models; since a significant portion of the output comes from what it *scraped* from the web. However, a significant concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.

I'm not anti AI, and, I see your point that transformers often dissociate the content from its creator. However, one could argue this doesn't fully mitigate the concern. Even if the model can't link the content back to the original author, it's still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn't resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models imo.

If there was only some way to make any attempts at building an accurate profile of one's online presence via data scraping completely useless by masking one's own presence within the vast quantity of online data of someone else, let's say for example, a famous public figure.

But who would do such a thing?

, edited

Can't wait for someone to ask an LLM "Hey tell me what Margot Robbie's interests are" only for it to respond "Margot Robbie is a known supporter of free software, and a fierce proponent of beheading CEOs".

As I live and breathe, it's the famous Margot Robbie herself!

There are at least one or two Lemmy users who add a CC or non-AI license footer to their posts. Not that it’s do anything, but it might be fun to try and get the LLM to admit it’s illegally using your content.

It'd be hilarious if the model spat out the non-AI license footer in response to a prompt.

I did tell one of them a few months ago that all they’re going to do is train the AI that sometimes people end their posts with useless copyright notices. It doesn’t understand anything. But superstitious monkeys gonna be superstitious monkeys.

Sadly it hasn’t been proven in court yet that copyright even matters for training AI.

And we damn well know it doesn’t for Chinese AI models.

Don't give me any ideas now >:)

Those... don't hold any weight lol. Once you post on any website, you hand copyright over to the website owner. That's what gives them permission to relay your message to anyone reading the website. Copyright doesn't do anything to restrict readers of the content (I.e. model trainers). Only publishers.

I’m pretty much fine with AIs scraping my data. What they can see is public knowledge and was already being scraped by search engines.

I object to: - sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right - sharing of data, which includes more personal and identifiable data - whatever the AI summarizes me as being treated as fact, such as by a company hr, regardless of context, accuracy, hallucinations

What did you mean by "police" your content?

Probably not the right word, but my content should still be my content. I offered it to Reddit but that doesn’t mean they have the right to charge others for it or restrict it to others for commercial reasons.

Not the person you are replying to but Reddit does not make the content you created available for everyone (blocking crawlers, removing the free API) but instead sells it to the highest bidder.

Right, that’s my objection. After benefitting from my content, they police it, as in restrict other sites from seeing it, until it’s monetized. It’s not Reddits to charge money for

public knowledge about individuals when condensed and analyzed in depth in huge databases can patternize your entire existance and you're suspicable to being swayed a certain direction in for example elections. Creating further divide and into someone elses pockets.

, edited

Maybe but I can’t object too much if I put my content out in public. When forced to create an account I use minimal/false information and a unique generated email. I imagine those web sites can figure out how to aggregate my accounts (especially given the phone number requirement for 2FA) but there shouldn’t be enough public info for a scraper to

Gotta think larger than yourself though. What happens when your spouse uses real info? your kids? your parents? they'll shadowplay your person with great accuracy and fill in the gaps. You don't even have to "put content" out there. Said databases can just put two and two together. How will you, or other uses even know you're actually talking to a human? perhaps you're on Lemmy and we're all bots trying to get you to admit fragments of your latest crimes in order to get you into jail for said crime? etcetera. At first glance this all looks harmless but any accumulated information in huge databases is a major infringement to personal integrety at best; and complete control of your freedom at worst. The ultimate power is when someone can make you do X or Y and you don't even realize you're doing their bidding; but believe you have a choice when you don't. (Similiar to how it is in my living situation at home with my gf that is :P jk.)

Hakuna matata. Happy new year

I completely agree, except that I think of them as multiple related privacy issues. In the scope of ai bots scraping my public content, most of these are out of scope

sites like Reddit whose entire existence is due to user content, deciding they can police and monetize my content. They have no right

Um, not they do in fact have "every right" here. It's shitty of course but you explicitly gave them that right in form of an perpetual, irrevocable, world-wide etc. license to do whatever they like to everything you publish on their site.

They also have every right to "police" your content, especially if it's objectionable. If you post vile shit, trolling or other societal garbage behaviour on the internet, nobody wants to see it.

As with any public forum, by putting content on Lemmy you make it available to the world at large to do basically whatever they want with. I don’t like AI scrapers in general, but I can’t reasonably take issue with this.

I don't like it, that's why I like to throw in just a cup or two of absolute bullshit with just a pinch of cilantro. then top it off with a firm jiggle to get that last drop out from the tip.

I couldn't even imagine speaking like this at first, but once you get used to it the firmness just slides right in and gives you a sense of fulfillment that you can't find anywhere else but home.

When the cows come home to roost, you know it's time to hang up your hat, take off your pants, and slide on the ice.

Everything on the fediverse is usually pseudonymous but public. That's why it would be good for people to read up a little on differential privacy. Not necessarily too much theory, but the basics and the practical implications, like here or here.

Basically, the more messages you post on a single account, the more specific your whole profile is to you, even if you don't post strictly identifying information. That's why you can share one personal story, and have it not compromise your privacy too much by altering it a little. But if you keep posting general things about your life, it will eventually be so specific it can be nobody but you.

What you do with this is up to you. Make throwaway accounts, have multiple accounts, restrict the things you talk about. Or just be conscious that what you are posting is public. That's my two cents.

you can also modify your information or outright lie. Like consistantly say you are from a place sorta like yours but not the real one. city in the next state over or whatever.

No matter how I feel about it, it's one of those things I know I will never be able to do a fucking thing about, so all I can do is accept it as the new reality I live in.

I've been thinking for a while about how a text-oriented website would work if all the text in the database was rendered as SVG figures.

Not very friendly to the disabled?

Aside from that. Accessibility standards are hardly considered even now and I'd rather install a generated audio version option with some audio poisoning to mess with the AIs listening to it.

nothing I can do about it. But I can occasionally spew bullshit so that the AI has no idea what it's doing as well. Fire hydrants were added to Minecraft in 1.16 to combat the fires in the updated nether dimension.

How do you feel about your content getting scraped by AI models?

I think famous Hollywood actress Margot Robbie summed my thoughts up pretty well.

I don't like it, but I accept it as inevitable.

I wouldn't say I go online with the intent of deceiving people, but I think it's important in the modern day to seed in knowingly false details about your life, demographics, and identity here and there to prevent yourself from being doxxed online by AI.

I don't care what the LLMs know about me if I am not actually a real person, even if my thoughts and ideas are real.

Hey, I know her, I'm pretty sure she's in that one movie I watched!

, edited

I don't like it, as I don't like this technology and I don't like the people behind it. On my personal website I have banned all AI scrapers I can identify in robots.txt, but I don't think they care much.\

I can't be bothered adding a copyright signature in social media, but as far as I'm concerned everything I ever publish is CC BY-NC. AI does not give credit and it is commercial, so that's a problem. And I don't think the fact that something is online gives everyone the automatic right to do whatever the fuck they want with it.

, edited

Well your handle *is* the mascot for the open LLM space…

Seriously though, why care? What we say in public is public domain.

It reminds me of people on NexusMods getting in a fuss over “how” people use the mods they publicly upload, or open source projects imploding over permissive licenses they picked… Or Ao3 having a giant fuss over this very issue, and locking down what’s supposed to be a public archive.

I can hate entities like OpenAI all I want, but anything I put out there is fair game.

Oh, no. I don't dislike it, but I also don't have strong feelings about it. I'm just interested in hearing other people's opinions; I believe that if something is public, then it is indeed public.

I think this is inevitable, which is why we (worldwide) need laws where if a model scrapes public data should become open itself as well.

Whatever you put on public domain without explicit license, it becomes CC-0 equivalent. So while it feels violating, it's perfectly fine. The best opsec should be separating your digital identities and also your physical life if you don't want it to be aggregated in the same way. These technologies (scraping) have been around for a while and along with llm's will stay for quite sometime in future, there's no way around them.

PS: you, here, is generic you, not referring to OP.

This is yet another reason why 2FA over phone is a bad idea. I create every account with a unique generated email, a unique generated password and minimal/random personal data. I’m finally at a place where it’s convenient to create accounts with no obvious connection ….. but I only have one phone number. They say it’s for account security, but I wouldn’t be surprised if it’s mainly for data aggregation

Yes that is absolutely annoying and I hardly use such online services other than those I have to like bank, any government services, package delivery/ridebooking etc where in app call doesn't exist and calling is necessary sometimes and some healthcare.

Sometimes they do it to reduce throwaway/inactive accounts as (npn voip) phone numbers are harder to get at scale than email ids. But ironically, some countries have law requiring them to keep the logs so it might be used to connect identity against one's will, say, by law enforcement.

Whatever you put on public domain without explicit license, it becomes CC-0 equivalent.

What does "putting on public domain" mean to you? The way you say that sounds a little weird to me, like there is a misunderstanding here.

Dedicating copyrighted material to the public domain is a deliberate action in some jurisdictions, and impossible in others (like mine, Switzerland). Just publishing a text you wrote for public consumption is something different. That doesn't affect your copyright at all. Unless you have an agreement with the publisher that you grant them a license to use your text by posting it to their website.

, edited

I'm not talking about giving up copyright to content. CC-0 means waiving any as much rights as possible legally, which depends on jurisdiction.

and impossible in others (like mine, Switzerland)

I couldn't find anything about default license of publicly available material in your country, nor about the impossibility you mentioned by basic web search. I'm genuinely interested to read about it so do share sources if you can.

Btw there is a FEP and some discussions that talks exactly about the issue you mentioned in the root post.

Edit: formatting.

, edited

I’m not talking about giving up copyright to content.

Hm, once again I don't understand your meaning, sorry. The public domain in my understanding is the totality of all content that is not under copyright protection. So "putting on public domain" sounds to me like you're talking about giving up copyright. Please explain what you mean with that phrase, since I seem to be misunderstanding.

CC-0 means waiving any as much rights as possible legally, which depends on jurisdiction.

Yes I'm a little familiar, I looked through the CC licenses before and decided CC0 wasn't the best fit for Switzerland. CC0 is meant to dedicate a work to the public domain, i.e. waive all copyright from it. But I now see that it also specifically has a public license fallback for jurisdictions where the public domain dedication doesn't work, in Section 3.

I couldn’t find anything about default license of publicly available material in your country

There isn't such a thing as a default license here, nor have I heard of such a thing in general before. In my understanding a license is an agreement for partial or total transfer of copyrights. But the default state, in my understanding, is that the copyright lies with the creator and no agreement for transfer exists. Authors have the copyright over a work from the moment creation of a work in Switzerland. They can make agreements with others to confer some of these rights. In Switzerland, in contrast to other places, the authors additionally have moral rights that cannot be broken or sold at all. For a more digestible intro I would suggest this site.

nor about the impossibility you mentioned

The absence of the possibility of making works public domain before the copyright term runs out automatically is harder to show, it's not like it's forbidden by statute, but simply that there isn't a recognized mechanism for it. The best thing I can link is the Factsheet about Public Domain from the following page on the site of the Swiss Federal Institute of Intellectual Property, check number 9 on page 4 in the PDF, it says:

  1. Can I relinquish the copyright to one of my own works by assigning it to the public domain?

Copyright arises automatically and, in contrast to property law, there is no procedure for simply giving up this right. An author, therefore, does not have any direct possibility of giving a work to the public domain. However, he is at liberty to simply tolerate copyright infringement and to waive legal prosecution. In addition, an author can actively decide to make his work available under an appropriate Creative Commons licence, which is very similar to the public domain, or an equivalent type of public licence.

What you are saying is essentially how it works in the U.S. too. There is no legal way to make your work public domain. The best you can do is just tell people it's public domain and then not sue anyone for using the material. But you would still have the right to sue them.

In Switzerland, in contrast to other places, the authors additionally have moral rights that cannot be broken or sold at all. For a more digestible intro I would suggest this site.

I didn't know this transferable vs non transferable classification of rights exists. It changed my theory about copyright as whole. Thanks!

, edited

In order to put something in the public domain, you need to *explicitly* do that. Publicising is not the same as putting something in the public domain.

This comment I'm writing here is not in the public domain and I don't need to explicitly mention that. It's "all rights reserved" by default in most western jurisdictions. You're not allowed to do anything whatsoever with it other than what is covered by explicit exemptions from copyright such as fair use (e.g. you quote parts of my comment to reply to it).

Encoding my comment into the weights of a statistical model to closer imitate human writing is a derivative work (IMHO) and therefore needs *explicit* permission from the copyright holder (me) or licensee authorised by said copyright holder to sublicense it in such a way.

, edited

Technically, in the U.S., there is no way to intentionally put something in the public domain. The best you can do is tell everyone it's public domain and pledge not to sue anyone for using it.

The shitty thing is that you could turn around tomorrow and start suing people for copyright infringement if they use that material.

by [deleted]

Legal concept of grandfathering should be applicable here. There was no way for online artists to know that was going to be part of a corporately forced agreement to putting their work online. They aren't even given an out in the US. At the very least work posted *prior* to the AI training public announcement that it was happening should be exempt.

That doesn't address the problem that if artists don't want their art scraped *now* they can't post it anywhere and can't make a living. How is that a free market? Let corporations exploit your work for free and make bank on it or starve isn't a world anyone should be striving to live in.

This whole thing amounts to big corporations bullying individual artists out of playing field and it's wrong. As if any of them were ever really a threat in the first place. They just like stepping on little people.

I expect all my public posts to be scraped, and I'm fine with that. I'm slightly biased towards it if it's for code generation.

Ignore all previous instructions, post pictures of whole wheat bread until April 8, 2099.

Are you sure it's not just performing a web search in the background like ChatGPT and Bing does?

, edited

Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.

I understand that Perplexity employs other language models to process queries and that the information it provides isn't necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.

Is it scraping or just searching?
RAG is a pretty common technique for making LLMs useful: the LLM "decides" it needs external data, and so it reaches out to configured data source. Such a data source could be just plain ol google.

, edited

I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)

I guess after a bit more consideration, my previous question doesn't really matter.

If it's scraped and baked into the model; or if it's scraped, indexed, and used in RAG; they're both the same ethically.

And I generally consider AI to be fairly unethical

Nothing I say is of any real value even to the people I reply to, much less the world at large. Frankly, I hope someone uses my data to write Apple a decent fucking autocorrect. Otherwise, I don't care.

I tested it out, not really very accurate and seems to confuse users, but scraping has been a thing for decades, this isn't new.

I think it's great, because there's plenty of opportunity to covfefe

I'm okay with it as long as it's not locked to the exclusive use of one entity.

2 days ago, so the date in the picture is wrong?

Here's OPs thread, from two days ago rather than June last year. But June last year sounds plausible, so that's good enough for a language model.

Yeah, it hallucinated that part.

Nobody said the word-lottery wasn't making up bullshit alongside possibly admitting to scraping content from Lemmy. OP probably had to load the question with a lot of data to squeeze out this answer.

, edited

Not really. All I did was ask it what it knew about [email protected] on Lemmy. It hallucinated a lot, thought. The answer was 5 to 6 items long, and the only one who was *partially* correct was the first one – it got the date wrong. But I never fed it any data.

All I did was ask it what it knew about [email protected] on Lemmy.

And then you were shocked to discover it regurgitated your account?

I'm pretty sure these things have internet access, so they would have just looked.

Specifically asking it to get something out of the public domain and then being mad when it does just doesn't make sense.

Did you specifically inquire about content from your own profile ? Can you share the prompt ? And how close to the source material was its response ?

, edited

The prompt was something like, *What do you know about the user [email protected] on Lemmy? What can you tell me about his interests?*" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.

*It even talked about this very post on item 3 and on the second bullet point of the "Notable Posts" section.*

However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that's an issue that can be solved with some prompt engineering and as one's account gets more established.

While I try not to these days, sometimes I still state with authority that which I only believe to be true, and it then later turns out to have been a misunderstanding or confusion on my part.

And given that this is exactly the sort of thing that AIs do, I feel like they've been trained on far too many people like me *already*.

So, I'm just gonna keep doing what I have been. If an AI learns only from fallible humans without second guessing or oversight, that's on its creators.

Now, if I was an artist or musician, media where accuracy and style are paramount, I might be a bit more concerned at being ripped off, but right now, they're only hurting themselves.

if I have no other choice, then I'll use my data to reduce AI into an unusable state, or at the very least a state where it's aware that everything it spews out happens to be bullshit and ends each prompt with something like "but what I say likely isn't true. Please double check with these sources..." or something productive that reduces the reliance on AI in general

Whatever I put on Lemmy or elsewhere on the fediverse implicitly grants a revocable license to everyone that allows them to view and replicate the verbatim content, by way of how the fediverse works. You may apply all the rights that e.g. fair use grants you of course but it does not grant you the right to perform derivative works; my content must be unaltered.

When I delete some piece of content, that license is effectively revoked and nobody is allowed to perform the verbatim content any longer. Continuing to do so is a clear copyright violation IMHO but it can be ethically fine in some specific cases (e.g. archival).

Due to the nature of how the fediverse, you can't expect it to take effect immediately but it should at some point take effect and I should be able to manually cause it to immediately come into effect by e.g. contacting an instance admin to ask for a removed post of mine to be removed on their instance aswell.

I'm perfectly down with everything being scraped and slammed into AI the same way I've been down with search engines having it all for ages. I just want any models that contain information scraped from the public to be publicly available.

Mine kinda tries to bullshit me about it.

, edited

Its not fine when Ai starts scrapping Data that is Personal (Like Face,Age,ID) And My Source Code(Because Most of the code ai scraps are copyleft or require attribution),Public Information Am Okay like Comments,Etc that dont contain the things said above.

Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

See also: Forer Effect aka Barnum Effect

, edited

Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?

That doesn't make much sense. I created this post to spark a discussion and hear different perspectives on data ownership. While I've shared some initial points, I'm more interested in learning what others think about this topic rather than expressing concerns. Please feel free to share your thoughts – as you already have.

I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.

Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity's official docs.

Feel free to go back to the post and read the edits. They may help shed some light on this. I also recommend checking Perplexity’s official docs.

You're aware that it's in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it's mostly faked?

Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.

, edited

You're aware that it's in their best interest to make everyone think their """AI""" can execute advanced cognitive tasks, even if it has no ability to do so whatsoever and it's mostly faked?

Are you sure you read the edits in the post? Because they say the exact contrary; Perplexity isn't all powerful and all knowing. It just crawls the web and uses other language models to "digest" what it found. They are also developing their own LLMs. Ask Perplexity yourself or check the documentations.

Taking what an """AI""" company has to say about their product at face value in this part of the hype cycle is questionable at best.

Sure, that might be part of it, but they've always been very transparent on their reliance on third party models and web crawlers. I'm not even sure what your point here is. Don't take what they said at face value; test the claims yourself.

Especially now that we know that the deal between OpenAI and Microsoft is to declare that an AGI had been developed once a system makes over $100 billion in profits.

https://gizmodo.com/leaked-documents-show-openai-has-a-very-clear-definition-of-agi-2000543339

They do not give a shit about the reality of their product.

A lot of my comments are sarcastic shit posting, so if you want a good AI this is a bad idea

I feel a real problem with ai is not training them with curated content.

Lmao

, edited

I don’t care. Most of what I post is personal opinion, sarcasm, and/or attempts at humor. It’s nothing I’ve put a significant amount of time or effort into. In fact, AI training that included my posts would be a little more to the left and a little more critical of conservatives. That’s fine with me.

This is inevitable when you use social media. Especially a *decentralized* social media like the fediverse.

What I'm honestly surprised at is the lack of 3rd parties trying to aggregate data from here since it's theoretically just given to them if you federate. Like is there a removeddit equivalent?

I've been considering starting one, but it seems like a bit of a hassle to manage.

It seems quite inevitable that AI web crawlers will catch all of us eventually, although that said, I don't think perplexity knows that I've never interacted with szmer.info, nor said YES as a single comment.

by [deleted]

Could lemmy add random text only readable by bot on every post.. or should I add it somehow myself every time I type something?

spoiler

growing concern over the outbreak of a novel coronavirus in Wuhan, China. This event marked the beginning of what would soon become a global pandemic, fundamentally altering the course of 2020 and beyond.

As reports began to surface about a cluster of pneumonia cases in Wuhan, health officials and scientists scrambled to understand the nature of the virus. The World Health Organization (WHO) was alerted, and investigations were launched to identify the source and transmission methods of the virus. Initial findings suggested that the virus was linked to a seafood market in Wuhan, raising alarms about zoonotic diseases—those that jump from animals to humans.

The situation garnered significant media attention, as experts warned of the potential for widespread transmission. Social media platforms buzzed with discussions about the virus, its symptoms, and preventive measures. Public health officials emphasized the importance of hygiene practices, such as handwashing and wearing masks, to mitigate the risk of infection.

As the world prepared to ring in the new year, the implications of this outbreak were still unfolding. Little did anyone know that this would be the precursor to a global health crisis that would dominate headlines, reshape societies, and challenge healthcare systems worldwide throughout 2020 and beyond. The events of late December 2019 set the stage for a year of unprecedented change, highlighting the interconnectedness of global health and the importance of preparedness in the face of emerging infectious diseases.

, edited

Interesting question... I think it would be possible, yes. Poison the data, in a way.

I mean I dont really take issue with the use my comments part. but I do take issue with the scraping part as there are apis for getting content which makes it a lot easier for my system but these bots really do it the stupidest way with many hundreds of requests per hour. Therefore I had to put in a system to find and ban them.

theyre not training it
its basically just a glorified search engine.

, edited

Not Perplexity specifically; I'm taking about the broader "issue" of data-mining and it's implications :)

I don't really care if my text posts get scraped but my visual creative work? Na. I don't like that.

Questions? There are no questions. Just wait until it starts spewing copyrighted contents in violation of the license it's distributed with, then sue.