Sunday, December 15, 2024

Social Media Platforms Are Defective By Design. Recall And Quarantine Them.

While the social-mediaphobes are largely wrong about their latest moral panic with regard to young people specifically, there is still a very vexing kernel of truth to what they say, albeit for all ages.  That is, to say the quiet part out loud, social media platforms are defective by design.  If they were merely designed to be deliberately addictive, with features engineered to keep users "engaged" (or in Vegas lingo, increase "time on machine"), which of course they are, that alone would be bad enough.  But it gets worse than that.  These platforms are designed to amplify the very worst of humanity, as far and wide as possible, so the companies that run them can literally profit off of the world's misery.  And these soulless corporations couldn't care less about who gets hurt in the process, so long as their bottom lines increase.  And it's only getting worse, not better.

(And so-called "dating apps" are basically "social media on crack", by the way.  So everything in this article that we say about social media shall apply a fortiori to these algorithmically driven apps as well.)

Thus, the TSAP currently believes that emergency executive action needs to be taken by the President of the United States, yesterday.  (Of course, we know that no President actually will.)  That is, declare these products defective by design, recall them, and "quarantine" them (for all ages) for two weeks or until they can be made safer, whichever is longer.  That is, freeze the platforms completely and sign everyone out automatically.  Exceptions should of course be made for standalone direct messaging apps like FB Messenger or WhatsApp (which are used frequently for international business with the Global South as well as the Global North), provided that group chats are limited to no more than 10 people (larger chats would get frozen too).

This is of course temporary, so while it will provide a much needed "digital detox" for millions of people, it will not actually solve the collective action problem of Big Tech and the "Social Dilemma".  But after that, here are some things that actually will, in descending order of priority and effectiveness:

  1. First and foremost, take a "Privacy First" approach as recommended by the Electronic Frontier Foundation (EFF).  Pass comprehensive data privacy legislation for all ages that, at a minimum, would ban surveillance advertising, and ban data brokers too.
  2. Audit the algorithms and internal research of the Big Tech giants, and make the results publicly available for all to see.  Sunlight is truly the best disinfectant. 
  3. Require the strictest and safest privacy settings to be the default settings for all users of all ages, which can then be adjusted more liberally by the users themselves.  For example, "friends only" sharing and "no DMs enabled from people whom one does not follow" by default.  And allow the option to turn off all DMs completely as well.
  4. Require or incentivize the use of various "architectural" safety features on all social media, such as various nudges, #OneClickSafer ("stop at two hops") to reduce the pitfalls of frictionless sharing, and increase the use of CAPTCHAs and similar tools to root out the pervasive toxic bots.
  5. If after doing that, We the People feel that we must still get stricter in terms of age, then don't make things any stricter than current California standards (i.e. CCPA and CAADCA).  That is, a "Kids Code" would be fine as long as it is properly written and doesn't result in censorship or mandatory age verification. 

The first two items on the list in particular would of course be vehemently opposed by Big Tech.  That's because their whole business model depends on creepy surveillance advertising and creepy algorithms, and thus on incentivizing addiction for profit.  They would thus have to switch to the (gasp!) DuckDuckGo model if these items were enacted.  (Plays world's smallest violin.)  That would of course be tantamount to throwing the One Ring into the fires of Mount Doom, in J.R.R. Tolkien's Lord of the Rings.

Other good ideas we would endorse are a voluntary smartphone buyback program (similar to gun buybacks), and perhaps even paying people to voluntarily delete or deactivate their social media accounts for a time. That would accomplish far more than any realistic mandatory measures would.

Another possible idea is simply to slow down by design the pace of these social media platforms.  Much like #OneClickSafer mentioned above, adding a little bit of friction to an otherwise frictionless system can help tame the very real dark side of that system.  I mean, would you willingly drive on a frictionless surface (such as ice)?  Of course you wouldn't.

Note that internet connection speeds are more than ten times faster (!) today on average than in 2010.  That leaves a LOT of room for adding back friction!

And finally, the idea of banning certain questionable design features (infinite scroll, autoplay, etc.) may be controversial in terms of whether such features are protected by the First Amendment, but we believe that those features per se are not automatically protected, unless the ban is deliberately abused to censor specific content.  If such bans are truly content-neutral, we are fine with them.

We must remember that, at the end of the day, Big Tech is NOT our friend.  But neither are the illiberal control freak zealots.  These measures that we endorse will actually make both sides quite angry indeed.  And if nothing else, they will certainly help Americans of all ages finally snap out of the collective trance we have (more or less) all been under since the "Like Button Apocalypse" launched in 2009, and social media went fully mainstream shortly thereafter.

So what are we waiting for?

Friday, December 13, 2024

How To Solve The Big Tech Problem Without Violating Anyone's Rights (Updated Re-Post)

"Big Tech is the new Big Tobacco" is often bandied about these days.  And while that has a kernel of truth to it (a kernel the size of a cornfield, in fact), it is also used by authoritarian zealots with a very illiberal (and ageist) agenda.  Mandatory age verification, censorship, repealing Section 230, and other related illiberal restrictions would open the door to many unintended consequences for privacy, cybersecurity, and civil rights and liberties in general.  Even those adults who don't support youth rights will experience these consequences sooner or later.  Kafka, meet trap.  Pandora, meet box.  Albatross, meet neck.  And of course, baby, meet bathwater.

And none of these things will actually solve the collective action problem of Big Tech and the "Social Dilemma".  But here are some things that will, in descending order of priority and effectiveness:

  1. First and foremost, take a "Privacy First" approach as recommended by the Electronic Frontier Foundation (EFF).  Pass comprehensive data privacy legislation for all ages that, at a minimum, would ban surveillance advertising, and ban data brokers too.
  2. Audit the algorithms and internal research of the Big Tech giants, and make the results publicly available for all to see.  Sunlight is truly the best disinfectant. 
  3. Require the strictest and safest privacy settings to be the default settings for all users of all ages, which can then be adjusted more liberally by the users themselves.  For example, "friends only" sharing and "no DMs enabled from people whom one does not follow" by default.  And allow the option to turn off all DMs completely as well.
  4. Require or incentivize the use of various "architectural" safety features on all social media, such as various nudges, #OneClickSafer ("stop at two hops") to reduce the pitfalls of frictionless sharing, and increase the use of CAPTCHAs and similar tools to root out the pervasive toxic bots.
  5. If after doing that, We the People feel that we must still get stricter in terms of age, then don't make things any stricter than current California standards (i.e. CCPA and CAADCA).  That is, a "Kids Code" would be fine as long as it is properly written and doesn't result in censorship or mandatory age verification. 

The first two items on the list in particular would of course be vehemently opposed by Big Tech.  That's because their whole business model depends on creepy surveillance advertising and creepy algorithms, and thus on incentivizing addiction for profit.  They would thus have to switch to the (gasp!) DuckDuckGo model if these items were enacted.  (Plays world's smallest violin.)  That would of course be tantamount to throwing the One Ring into the fires of Mount Doom, in J.R.R. Tolkien's Lord of the Rings.

For another, related collective action problem, what about the emerging idea of phone-free schools?  Fine, but to be fair, how about phone-free workplaces for all ages as well?  In both cases, the rule should ONLY apply while "on the clock", which for school would be best defined as the time from the opening bell to the final bell of the day, plus any after-school detention time.  And of course, in both cases, there would have to be medical exemptions for students and employees who need such devices for real-time medical monitoring (glucose monitoring for diabetes, for example).  Surely productivity would increase so much as a result that we could easily shorten the standard workweek to 30-32 hours (8 hours for 4 days, or 6 hours for 5 days) with no loss in profits?  But that would make too much sense.

Other good ideas we would endorse are a voluntary smartphone buyback program (similar to gun buybacks), and perhaps even paying people to voluntarily delete or deactivate their social media accounts for a time. That would accomplish far more than any realistic mandatory measures would.

Another possible idea is simply to slow down by design the pace of these social media platforms.  Much like #OneClickSafer mentioned above, adding a little bit of friction to an otherwise frictionless system can help tame the very real dark side of that system.  I mean, would you willingly drive on a frictionless surface (such as ice)?  Of course you wouldn't.

Note that internet connection speeds are more than ten times faster (!) today on average than in 2010.  That leaves a LOT of room for adding back friction!

And finally, the idea of banning certain questionable design features (infinite scroll, autoplay, etc.) may be controversial in terms of whether such features are protected by the First Amendment, but we believe that those features per se are not automatically protected, unless the ban is deliberately abused to censor specific content.  If such bans are truly content-neutral, we are fine with them.

We must remember that, at the end of the day, Big Tech is NOT our friend.  But neither are the illiberal control freak zealots.  These measures that we endorse will actually make both sides quite angry indeed.  But truly that's a feature, not a bug.

Big Tech can go EFF off!

UPDATE:  We had opposed KOSA until recently due to censorship concerns, and while those concerns have been somewhat alleviated by recent edits to the bill, we still cannot say we support it 100%.  But for now, we have dropped our opposition, if for no other reason than to forestall more restrictive bills (like Australia's new law) in the future.  Thus the TSAP and Twenty-One Debunked are currently neutral on KOSA, despite it still not being ideal.