An excellent decision. If it had gone the other way, we likely would have seen social media sites shut down entirely and comments disabled on YouTube. It also would have directly affected anyone in the U.S. who wanted to run an instance of Lemmy (or any federated instance where users can post content).

The rulings concerned Section 230, a law passed in 1996 aimed at protecting services that allow users to post their own content.

The Supreme Court tackled two different cases concerning this:

  1. Whether social media platforms can be held liable for what their users have said.
  2. Whether recommendation algorithms that serve tailored content to individual users mean a company can be considered to be knowingly aiding and abetting terrorists (when pro-terrorist content gets recommended to other users).
  •  jmp242   ( @jmp242@sopuli.xyz ) 

    My problem with it is in #2. Should the NYT not have any liability if they print a letter to the editor that an AI or ‘algorithm’ picked? If not, why is it different from the above? This ‘get out of responsibility free’ card for pushing content is wrong. Otherwise the famous nipple slip should never have come down on the broadcaster, because they didn’t create it; it was ‘user content’. This “it’s not me, it was my machine” is such a stupid dodge imho. If you built the machine, and/or you’re running the machine, you have some responsibility for what that machine does.

    • That’s fine, but let’s dig into it a bit more.

      Where do you draw the line between what’s considered “terrorist content” and what is just promoting things that terrorists also promote?

      And how do you fix the algorithm so that absolutely nothing that crosses that line ever gets through?

      Just look at how well email filters work against junk mail and scams.

      Now let’s apply this to Lemmy and federated instances. If you personally are hosting an instance, of course you’re going to do your best to keep it free from content like that. Let’s say you’re running some open source code that has an algorithm for highlighting posts that align with the user’s previously liked content.
      If someone posts something that crosses the line and it gets around your filters and somehow gets highlighted to other users before you can remove it, you are suggesting that the person in charge of that instance should be directly held responsible and criminally charged for aiding and abetting terrorism.
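
      To make that concrete, here is a rough sketch of the kind of highlighting algorithm I have in mind. This is purely my own illustration (not actual Lemmy code); it just ranks posts by how much their tags overlap with what the user previously upvoted:

```python
from collections import Counter

def highlight_posts(user_liked_tags, candidate_posts, top_n=5):
    """Rank candidate posts by tag overlap with posts the user previously upvoted."""
    def score(post):
        # The code only counts tag overlap; it never looks at what the post actually says.
        return sum(user_liked_tags[tag] for tag in post["tags"])
    return sorted(candidate_posts, key=score, reverse=True)[:top_n]

# Hypothetical data: tags from the user's upvote history and two new submissions.
liked = Counter({"linux": 4, "privacy": 2})
posts = [
    {"id": 1, "tags": ["linux", "gaming"]},
    {"id": 2, "tags": ["privacy", "linux"]},  # could just as easily be a rule-breaking post
]
print(highlight_posts(liked, posts))  # post 2 scores 6, post 1 scores 4
```

      A post that gets past your filters would be scored and highlighted exactly like any other post, which is the scenario above.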

      •  jmp242   ( @jmp242@sopuli.xyz ) 

        I’m not commenting on what’s “terrorist content”, because that’s not my complaint. Maybe that was critical to the Supreme Court case, but it wasn’t presented in the news that way (that I saw). It was all about how “we can’t be liable for what our algorithm promotes”. And this is the traditional “on a computer” sort of defense: if we take the computer out of it but do the same thing, we’d find people liable or criminally responsible. “On a Computer” should not be a “get out of responsibility free” card IMO.

        Look, if I, as a person, would be in trouble for feeding the content to a person, then just because I write a program and claim it’s “magic” that does the same thing, I don’t think you should get out of it. And frankly, I’m tired of Big Tech getting a pass on problems they have / create “because it’s too hard” to be responsible where no one else gets the “too hard” defense… Maybe you don’t do the thing till you can do it responsibly.

        I also think there needs to be some nuance - I’m stuck on the promoting part - the push aspect. If you as a user go search it out, or if it’s comments / replies in chronological order or threaded, current rules on reporting and takedown seem fine. I just think there is a difference between a bulletin board at a local town hall, and the mailed out town hall newsletter with promoted bulletin board entries.

        If your system is choosing what to show to an end user without that end user’s direct input, based on some decision process that is weighting things, especially automatically (i.e. clickless “play next”, etc.), then you have the same responsibility as CNN would for showing that video on their channel.

        • Ah, I see what you’re getting at.

          Maybe that was critical to the Supreme Court case, but it wasn’t presented in the news that way (that I saw)

          Yeah that’s the problem with a lot of news organizations. They like to spin stories to support whatever agenda/narrative they want to push rather than what the case was actually about.

          I would suggest this video by Steve Lehto: https://youtu.be/2EzX_RdpJlY
          He’s a lawyer who mostly comments on legal issues that end up in the news, and his insight is invaluable. He talks about these two cases specifically in this video.

          #2 was specifically about whether you would be considered to be aiding and abetting terrorists in a terror attack if the algorithm pushed their content to others.

          It sounds like there are a ton of other cases that have been submitted to the Supreme Court, so I’m sure there’s one that may address your concerns.

          And frankly, I’m tired of Big Tech getting a pass on problems they have / create “because it’s too hard” to be responsible where no one else gets the “too hard” defense.

          I get your frustration. I’m assuming that most everyone here is here because we’re fed up with what Big Tech has done with social media.
          But in this case a loss for big tech would have had even worse repercussions for smaller platforms like Lemmy.

          •  jmp242   ( @jmp242@sopuli.xyz ) 

            Ah sure. I’m not at all sure what should be considered aiding a terrorist organization, but I think if you took out “algorithm”, put in “employee”, and it would count as aiding, then slotting “algorithm” back in should not be a defense. That’s my main point.

            •  SolarSailer   ( @SolarSailer@beehaw.org ) OP

              I was curious about the definition as well, so I looked up the published opinion. You can find it on the official website (https://www.supremecourt.gov/opinions/slipopinion/22) by looking for “Twitter, Inc. v. Taamneh”, or here is a direct link: https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf

              Basically, it looks like most of the case revolved around working out the definition of “Aiding and Abetting” and how it applied to Facebook, Twitter, and Google. It’s worth reading, or at least skipping to the end where they summarize it.

              When they analyzed the algorithms, they found that:

              As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.

              The only way I could see them being liable for the algorithm is if a big tech company had tweaked it so that it specifically recommended the terrorist content more than it otherwise would have.

              The code doesn’t have a concept of what is right or what is wrong, it doesn’t even understand what the content is that it’s recommending. It just sees that users watching this video also typically watch that other video and so it recommends that.
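
              As a rough illustration of that kind of content-agnostic, “people who watched this also watched that” recommendation (this is just the general idea, not any company’s actual algorithm):

```python
from collections import defaultdict

# Hypothetical watch histories: the recommender only ever sees video IDs,
# never what the videos actually contain.
watch_histories = [
    ["cat_video", "cooking_101", "news_clip"],
    ["cat_video", "news_clip"],
    ["cooking_101", "news_clip"],
]

# Count how often each pair of videos shows up in the same user's history.
co_views = defaultdict(lambda: defaultdict(int))
for history in watch_histories:
    for a in history:
        for b in history:
            if a != b:
                co_views[a][b] += 1

def recommend_next(current_video, top_n=3):
    # Rank candidates purely by co-view counts -- no judgment about the content itself.
    related = co_views[current_video]
    return sorted(related, key=related.get, reverse=True)[:top_n]

print(recommend_next("cat_video"))  # ['news_clip', 'cooking_101']
```

              Nothing in there knows or cares what “news_clip” actually is; it just matches viewing patterns, which is the sense in which the opinion calls the assistance “passive”.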

              if you took out “algorithm”, put in “employee”, and it would count as aiding, then slotting “algorithm” back in should not be a defense.

              Alright, let me try a hypothetical here. Let’s say I hosted a public billboard in a town square and used some open source code to program a robot to automatically pick up fliers from a bin that anyone could submit fliers to. People can tag the top part of their flier with a specific color. The robot has an algorithm that reads the color and then puts up the fliers on a certain day of the week corresponding with that color.
              If someone slipped some terrorist propaganda into the bin, who is at fault for the robot’s actions?

              Should the developer who published the open source code be held liable for the robot’s actions?

              Should the person who hosts the billboard be liable for the robot’s actions?
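
              To spell the hypothetical out in code, the robot’s entire decision process could be as simple as this (my own sketch; the specific colors and days are made up):

```python
# The robot's whole "algorithm": map a color tag to a posting day.
# It never reads or evaluates what is printed on the flier.
COLOR_TO_DAY = {
    "red": "Monday",
    "blue": "Wednesday",
    "green": "Friday",
}

def schedule_fliers(bin_of_fliers):
    schedule = {day: [] for day in COLOR_TO_DAY.values()}
    for flier in bin_of_fliers:
        day = COLOR_TO_DAY.get(flier["tag_color"])
        if day is not None:  # fliers with an unknown color tag are simply skipped
            schedule[day].append(flier["id"])
    return schedule

# Whatever the fliers say, only the color tag matters to the robot.
fliers = [
    {"id": "bake_sale", "tag_color": "red"},
    {"id": "anonymous_submission", "tag_color": "blue"},
]
print(schedule_fliers(fliers))
# {'Monday': ['bake_sale'], 'Wednesday': ['anonymous_submission'], 'Friday': []}
```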

              Edit: fixed a grammatical error and added suggestion to read summary.

              •  jmp242   ( @jmp242@sopuli.xyz ) 

                I may check out more of the reading. I think it’s tricky in your example. I have doubts about how unbiased any algorithm is likely to be; we already have documented cases of algorithms being biased in ways that get people in trouble. So we can’t treat an algorithm as inherently neutral. Given human nature and the increasing complexity of algorithms, I’m unsure it’s philosophically possible to make a neutral algorithm, certainly not one that works like youtu.be does.

                So, to the extent the algorithm preferred content you could be liable for, the developer could be liable. Except that most open source (and closed source) licenses disclaim liability and put it on the entity running the software. I think we ought to look at how this was hashed out with physical machines and take many cues from that as to where the liabilities could fall. Some regulated fitness for purpose, for sales anyway, might actually help society, IMO.

                In terms of liability I feel like the locus has to be who is running the robot. Who owns it? Again, the robot is basically an employee.

                The algorithm is of course one of those things like obscenity: hard to define, but our legal system muddles through with that. If I’m writing the law, simple posting by date, however implemented, would not count as being a publisher. Anything you claim a trade secret in, or sell as part of your products, that chooses what to show is a judgment that makes you a publisher.

                Basically, think about whether you could have a person do it: can you give them a short set of rules such that it takes no experience or training? Then it’s not judgment. If it takes pages of a decision tree, or a gut feeling after lots of experience, to pick what goes where, then it is a judgment, and publisher rules should apply. I think sorting based on a tag applied by someone outside your org should not make you a publisher, because your org isn’t picking what to show; you’re showing everything.

                And the things that make this especially true for YouTube are that it automatically chooses what to show you next and also gives a shortlist of what you should watch next. I think it could avoid being a publisher if it just stopped after a video and you had to click back to the listing page to pick another or do a new search, like the way Google search works. I just think YouTube is way more like cable than a local billboard with different ads per day.
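
                Roughly, in code, the difference I’m pointing at looks something like this (the field names and weights are invented purely for illustration):

```python
from datetime import datetime

posts = [
    {"id": 1, "posted": datetime(2023, 5, 1), "watch_time": 300, "similarity": 0.9},
    {"id": 2, "posted": datetime(2023, 5, 3), "watch_time": 30,  "similarity": 0.2},
    {"id": 3, "posted": datetime(2023, 5, 2), "watch_time": 120, "similarity": 0.5},
]

def chronological_feed(posts):
    # "Simple posting by date": a short rule anyone could follow, no judgment involved.
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

def promoted_feed(posts, w_engagement=0.7, w_similarity=0.3):
    # Weighted selection: the chosen weights encode a judgment about what gets pushed.
    def score(p):
        return w_engagement * p["watch_time"] + w_similarity * p["similarity"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in chronological_feed(posts)])  # [2, 3, 1] -- newest first
print([p["id"] for p in promoted_feed(posts)])       # [1, 3, 2] -- driven by the weights
```

                In the first function you could hand someone the rule in one sentence; in the second, the whole point is the weighting.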

                But all of this is, in my opinion, predicated on there being any liability in what you’re publishing. I’m not asking for a new liability, just the same liability publishers already have. If Fox News is liable to Dominion, I don’t think that should change if it had been “robots” peddling the lies, even if it was all automated online.

                And this I think is actually what kills these cases - it’s not at all clear to me that you should be liable for anything you show people. But if we take this up we should be consistent with other publishers imo.

            • And I think that lines up with the actual decision. If an employee is tasked with following certain policies to keep terrorist content out of their business, those policies are reasonable, the employee fully follows them, and terrorist content finds its way in anyway, the employee should not be held responsible for it.

              And an algorithm is just a really complex company policy that is run by humans in cooperation with a machine.

              Similarly, if a school has a robust school shooter policy in place, all their staff are trained and following policy, and someone shoots up the school anyway, nobody has been aiding the shooter; they just weren’t good enough in stopping them.

              The challenge with promotion algorithms is that they are often more complex than a human can fully understand, so other algorithms are used to help check them. If there’s an error in the programming of either piece of software or in the test data or assumptions about the test criteria, it will break down. And it’s really difficult to tell/prove whether such a breakdown is accidental or if someone intentionally added a bug.

              Intent matters when it comes to many parts of the law.