I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.
- koper ( @koper@feddit.nl ) 152•1 year ago
For the last time: these language models are just regurgitating what people have said. They don’t analyze or reason.
- localhost ( @localhost@beehaw.org ) 47•1 year ago
That’s not entirely true.
LLMs are trained to predict the next word given context, yes. But in order to do that, they develop an internal model that minimizes error across a wide range of contexts - and an emergent feature of this process is that the model DOES do more than purely compress the training data.
For example, GPT-3 can solve addition and subtraction problems that didn't appear in the training dataset. This suggests the model learned how to perform addition and subtraction, likely because that was easier or more efficient than storing all of the examples from the training data separately.
This is a simple example to measure, but it's enough to suggest that LLMs can extrapolate from the training data and do more than just stitch relevant parts of the dataset together.
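To make "trained to predict the next word" concrete, here's a deliberately tiny sketch: a bigram counter, which is nothing like GPT-3's neural network but shares the same training signal. The corpus and everything else here is made up for illustration. Note that a counter like this can only memorize; the point about LLMs is that a fixed-size network can't store everything, so minimizing the same objective pushes it toward generalization instead.

```python
from collections import Counter, defaultdict

# Toy "language model": learn, by counting, which word tends to follow
# which. Real LLMs condition on long contexts with a neural network, but
# the training objective (predict the next token) is the same.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```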
- fuzzzerd ( @fuzzzerd@programming.dev ) 8•1 year ago
That’s interesting, I’d be curious to read more about that. Do you have any links to get started with? Searching this type of stuff on Google yields less than ideal results.
- localhost ( @localhost@beehaw.org ) 7•1 year ago
In my comment I’ve been referencing https://arxiv.org/pdf/2005.14165.pdf, specifically section 3.9.1 where they summarize results of the arithmetic tasks.
- hikaru755 ( @hikaru755@feddit.de ) 6•1 year ago
Check out this one: https://thegradient.pub/othello/
In it, researchers built a custom LLM trained to play a board game just by predicting the next move in a series of moves, with no input at all about the game state. They found evidence of an internal representation of the current game state, although the model had never been told what that game state looks like.
- Xandolas ( @Xandolas@beehaw.org ) 2•1 year ago
isn’t gpt famously bad at math problems?
- localhost ( @localhost@beehaw.org ) 7•1 year ago
GPT-3 is pretty bad at it compared to alternatives (although it's hard to compete with calculators in that field), but if it were just parroting the training dataset it would be far worse. From the study I linked in my other comment (https://arxiv.org/pdf/2005.14165.pdf):
On addition and subtraction, GPT-3 displays strong proficiency when the number of digits is small, achieving 100% accuracy on 2 digit addition, 98.9% at 2 digit subtraction, 80.2% at 3 digit addition, and 94.2% at 3-digit subtraction. Performance decreases as the number of digits increases, but GPT-3 still achieves 25-26% accuracy on four digit operations and 9-10% accuracy on five digit operations, suggesting at least some capacity to generalize to larger numbers of digits.
To spot-check whether the model is simply memorizing specific arithmetic problems, we took the 3-digit arithmetic problems in our test set and searched for them in our training data in both the forms “<NUM1> + <NUM2> =” and “<NUM1> plus <NUM2>”. Out of 2,000 addition problems we found only 17 matches (0.8%) and out of 2,000 subtraction problems we found only 2 matches (0.1%), suggesting that only a trivial fraction of the correct answers could have been memorized. In addition, inspection of incorrect answers reveals that the model often makes mistakes such as not carrying a “1”, suggesting it is actually attempting to perform the relevant computation rather than memorizing a table.
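The spot-check the paper describes is easy to picture in code. This is a hypothetical re-creation of the idea, not the authors' actual tooling; the corpus and problems below are invented:

```python
# Search a training corpus for test arithmetic problems in the two
# surface forms the GPT-3 paper looked for: "<NUM1> + <NUM2> =" and
# "<NUM1> plus <NUM2>". Few matches means few answers could be memorized.
training_text = "trivia: 21 plus 21 is 42. also note that 100 + 200 = 300."

test_problems = [(21, 21), (100, 200), (357, 468)]

def appears_in_training(a, b, text):
    """Check both surface forms the paper searched for."""
    return f"{a} + {b} =" in text or f"{a} plus {b}" in text

matches = sum(appears_in_training(a, b, training_text) for a, b in test_problems)
print(matches)  # 2 of the 3 made-up problems appear in the toy corpus
```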
I know. I just thought it was a bit ironic seeing such a strongly worded response from it.
- max ( @max@feddit.nl ) 12•1 year ago
Exactly. They’re great bullshitting machines, that’s it.
- Drewelite ( @Drewelite@lemmynsfw.com ) English10•1 year ago
Same as humans.
- verdare [he/him] ( @verdare@beehaw.org ) English8•1 year ago
LLMs do replicate a small subset of human cognition, but not the full scope. This can result in human-like behavior, but it’s important to be aware of the limitations.
The biggest limitation is the misalignment in goals. LLMs won’t perform a very deep analysis of their input because they don’t need to. Their goal isn’t honest discussion, a pursuit for truth, or even having a coherent set of beliefs about the world. Their only goal is to sound plausible. And, as it turns out, it’s not too hard to just bullshit your way through the Turing test.
- Elise ( @xilliah@beehaw.org ) 6•1 year ago
Could you share your source?
- graham1 ( @graham1@gekinzuku.com ) English9•1 year ago
Large language models literally do subspace projections on text to break it into contextual chunks, and then memorize the chunks. That’s how they’re defined.
Source: the paper that defined the transformer architecture and formulas for large language models, which has been cited in academic sources 85,000 times alone https://arxiv.org/abs/1706.03762
- notfromhere ( @notfromhere@lemmy.one ) 6•1 year ago
Hey, that comment’s a bit off the mark. Transformers don’t just memorize chunks of text, they’re way more sophisticated than that. They use attention mechanisms to figure out what parts of the text are important and how they relate to each other. It’s not about memorizing, it’s about understanding patterns and relationships. The paper you linked doesn’t say anything about these models just regurgitating information.
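For anyone curious what "attention figures out which parts are important" means mechanically, here's a minimal sketch of the scaled dot-product attention from that paper. The toy vectors are made up; real models use learned, much higher-dimensional representations:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted average of
    `values`, where the weights score how relevant each position is to the
    query. Nothing is looked up or stored verbatim."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# The query matches the first key, so the output is pulled toward the
# first value rather than copied from anywhere.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```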
- graham1 ( @graham1@gekinzuku.com ) English4•1 year ago
I believe your “They use attention mechanisms to figure out which parts of the text are important” is just a restatement of my “break it into contextual chunks”, no?
- Elise ( @xilliah@beehaw.org ) English1•1 year ago
As far as I understand it, such a model is more like a program than a database. How do you see it?
- frogman [he/him] ( @frogman@beehaw.org ) English11•1 year ago
i just tried this and got a more fence-sitting result of “here are the pros and cons, there should be public discussion before we know if it’s good or bad”.
but your result is fascinating.
- LoreleiSankTheShip ( @LoreleiSankTheShip@lemmy.ml ) English10•1 year ago
Would it be feasible to fork the internet if this comes to pass?
- realharo ( @realharo@lemm.ee ) 7•1 year ago
Internet is just a series of tubes. You’re talking about alternative content/services providers (news, video, shopping, etc.) if the existing ones choose to require only approved browsers.
Are you going to run your own news company?
- TheGreenGolem ( @TheGreenGolem@lemm.ee ) English10•1 year ago
The internet, without porn, is just 4 DVDs anyway.
- ɔiƚoxɘup ( @Quexotic@beehaw.org ) 8•1 year ago
What’s the response? The link doesn’t work.
In my opinion, Web Environment Integrity (WEI) is a bad idea. It has the potential to harm the open web and restrict user freedom.
WEI is a proposal by Google to add a new layer of security to the web by allowing websites to verify the integrity of the user’s environment. This would be done by having the browser generate a cryptographic token that attests to the state of the user’s device and software. Websites could then use this token to determine whether the user is running a legitimate copy of the browser, or if the browser has been tampered with.
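As a purely hypothetical illustration of that flow (the actual WEI proposal does not specify this API; every name and field below is invented), a website checking an attestation token might look like this:

```python
import hmac, hashlib, json

# Invented sketch: an "attester" signs a claim about the browser
# environment, and a website verifies the signature against a key it
# trusts before serving content. Real attestation would use public-key
# signatures, not a shared secret.
ATTESTER_KEY = b"shared-secret-known-to-attester-and-site"

def issue_token(environment_ok: bool) -> dict:
    payload = json.dumps({"environment_ok": environment_ok}).encode()
    sig = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def site_accepts(token: dict) -> bool:
    expected = hmac.new(ATTESTER_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # tampered with, or from an attester we don't trust
    return json.loads(token["payload"])["environment_ok"]

print(site_accepts(issue_token(True)))   # True
print(site_accepts(issue_token(False)))  # False
```

The worry in the surrounding discussion is exactly this gate: whoever holds the attester role decides which environments get `environment_ok`.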
There are a number of potential problems with WEI. First, it could be used to restrict user freedom. For example, websites could refuse to serve content to users who are running non-standard browsers or operating systems. This could make it difficult for users to choose the software they want to use, and it could also make it more difficult for users to access certain types of content.
Second, WEI could be used to stifle innovation. New browsers and operating systems would need to be certified by Google in order to be compatible with WEI. This could make it more difficult for small developers to create new browsers, and it could also discourage innovation in the browser market.
Third, WEI could be used to collect user data. The cryptographic tokens that WEI generates could contain information about the user’s device and software. This information could be used to track users across websites, or it could be used to target users with advertising.
Overall, I believe that WEI is a bad idea. It has the potential to harm the open web and restrict user freedom. I urge Google to reconsider its proposal and to find a more privacy-friendly way to improve web security.
Here are some additional examples of how WEI could be used to harm the open web:
- WEI could be used to block users from accessing websites that are critical of Google or its partners.
- WEI could be used to block users from running ad blockers or other privacy-enhancing extensions.
- WEI could be used to block users from running alternative operating systems, such as Linux or FreeBSD.
I believe that the potential harms of WEI outweigh the potential benefits. I urge Google to abandon this proposal and to find a more privacy-friendly way to improve web security.
- ɔiƚoxɘup ( @Quexotic@beehaw.org ) 2•1 year ago
Many thanks, internet friend.