Two of the biggest forces in two deeply intertwined tech ecosystems – large incumbents and startups – took a break from counting their money to jointly plead that the government stop even thinking about regulations that might affect their financial interests, or, as they like to call it, innovation.
“Our two companies may not agree on everything, but it’s not about our differences,” writes this group of very disparate perspectives and interests: a16z founding partners Marc Andreessen and Ben Horowitz, as well as Microsoft CEO Satya Nadella and president and chief legal officer Brad Smith. A truly intersectional assemblage, representing both big business and big capital.
But it’s the little guys they’re really looking out for, you see: all the companies that would have been affected by the latest attempt at regulatory overreach, SB 1047.
Imagine being fined for improper disclosure of an open model! a16z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the big tech companies that could, unlike Midha and his impoverished colleagues, afford the lawyers necessary to comply.
Except that was just misinformation promulgated by Andreessen Horowitz and the other moneyed interests that might actually have been affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only lightly affected, because the proposed law specifically protected them.
It is strange that the very type of deliberate carve-out for “Little Tech” that Horowitz and Andreessen regularly champion was distorted and downplayed by the lobbying campaign they and others waged against SB 1047. (The architect of that bill, California State Senator Scott Wiener, spoke about all of this recently at Disrupt.)
The bill had its problems, but its opposition vastly overstated the cost of compliance and failed to meaningfully support claims that it would chill or burden startups.
This is part of the established playbook that Big Tech – with which, despite their posturing, Andreessen and Horowitz are closely aligned – runs at the state level, where it can win (as with SB 1047), while demanding federal solutions that it knows will never come, or that will have no teeth due to partisan bickering and congressional ineptitude on technical issues.
This joint statement of “political expediency” is the latter part of the play: having torpedoed SB 1047, they can now say they only did so in order to support federal policy. Never mind that we’re still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state bills.
And what policies do they support? “A variety of responsible market-based approaches,” in other words: Hands off our money, Uncle Sam.
Regulations should take “a science-based and standards-based approach that recognizes regulatory frameworks focused on the application and misuse of technology” and should “focus on the risk of malicious actors misusing AI.” What this means is that we should not have proactive regulation, but rather reactive punishment when unregulated products are used by criminals for criminal purposes. This approach worked out great for the whole FTX situation, so I can see why they espouse it.
“Regulation should only be implemented if its benefits exceed its costs.” It would take thousands of words to explain all the ways this idea, expressed this way, in this context, is hilarious. But basically, what they are proposing is that the fox be put on the planning committee for the hen house.
Regulators should “give developers and startups the flexibility to choose which AI models to use wherever they create solutions and not tip the scales in favor of any particular platform.” The implication is that there is some sort of plan to require permission to use one model or another. Since there isn’t, this is a straw man.
Here’s a big one that I have to quote in full:
The right to learn: Copyright law is designed to promote the progress of science and useful arts by extending protection to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of AI – to learn in the same way humans do. Knowledge and unprotected facts, whether or not contained in protected subject matter, must remain free and accessible.
To be clear, the explicit claim here is that software, run by billion-dollar corporations, has the “right” to access any data because it should be able to learn from it “in the same way humans do.”
First of all: no. These systems are not like people; they produce data that mimics the human output in their training data. They are complex statistical projection software with a natural language interface. They have no more “right” to a document or fact than Excel does.
Second, this idea that “facts” – by which they mean “intellectual property” – are the only thing these systems care about, and that some sort of fact-hoarding cabal is working to thwart them, is an engineered narrative we have already seen. Perplexity invoked the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me on stage at Disrupt, as if they were being sued for knowing trivia like the distance from the Earth to the Moon.
While this is not the place for a full accounting of this particular straw man argument, let me simply point out that even if facts are indeed free agents, the way they are created – for example, through original reporting and scientific research – involves real costs. This is why copyright and patent systems exist: not to prevent intellectual property from being shared and widely used, but to encourage its creation by ensuring that it can be assigned real value.
Copyright law is far from perfect and is probably abused as much as it is used. But it is not being “co-opted to imply that machines should be prevented from using data”; it is being enforced to ensure that bad actors do not circumvent the systems of value we have built around intellectual property.
This is the ask, clearly stated: let the systems we own, run, and profit from freely use the valuable output of others, without compensation. To be fair, this part is “the same way humans do,” because humans are the ones who design, run, and deploy these systems, and just as those humans don’t want to pay for anything they don’t have to, they don’t want regulations to change that.
There are many other recommendations in this small policy document, which are undoubtedly more detailed in the versions sent directly to lawmakers and regulators through official lobbying channels.
Some ideas are undoubtedly good, even if they’re also a bit self-serving: “fund digital literacy programs that help people understand how to use AI tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support “Open Data Commons – accessible pools of data that would be managed in the public interest.” Great! “Examine its procurement practices to enable more startups to sell technology to the government.” Awesome!
But these more general, positive recommendations are the kinds of things we see from industry every year: investing in public resources and speeding up government processes. These acceptable but inconsequential suggestions are just a vehicle for the more important ones I’ve outlined above.
Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the government to stop regulating this lucrative new development, let the industry decide which regulations are worth the trade-off, and roll back copyright in a way that acts more or less as a general pardon for the illegal or unethical practices that many believe have enabled the rapid rise of AI. These are the policies that matter to them, whether or not kids become digitally literate.