Surprised by the Danger of Tesla’s Cybertruck? So is the Federal Government: A Crash Course on U.S. Automotive Regulation
In November 2019, Tesla Inc. (“Tesla”) unveiled the prototype of its latest innovation in the electric vehicle market—the Cybertruck. The brainchild of Tesla CEO and former “Richest Man in the World” Elon Musk, the Cybertruck was met with polarizing reactions from critics and fans alike. Some praised the Cybertruck’s polygonal design and its purported electric vehicle (“EV”) capabilities, such as a 500-mile range on a single charge. Others were skeptical of the design’s practicality and of Musk’s ability to deliver on his promises. Many industry experts questioned the safety of a sharp-edged, stainless steel, 14,000+ pound truck that accelerates from 0 to 60 MPH in 2.9 seconds.
The Quest for DABUS Continues: U.K. the Most Recent Country to Deny DABUS as an Inventor
In 2018, Dr. Stephen Thaler, a researcher specializing in artificial intelligence (AI), filed patent applications in several countries and designated a machine called Device for the Autonomous Bootstrapping of Unified Sentience (DABUS) as the inventor. Dr. Thaler earned his Ph.D. in Physics from the University of Missouri-Columbia with a thesis that focused on radiation damage in silicon. He is currently the President and CEO of Imagination Engines, Inc., a company known for computational creativity and for working with the U.S. Department of Defense. Dr. Thaler claimed that DABUS acted autonomously as an inventor without help from humans and that DABUS is a “creativity machine.” The first patent application Dr. Thaler filed with DABUS as the inventor was for a food (or drink) container whose shape allows for improved storage and handling characteristics compared to traditional container shapes. The second patent application Dr. Thaler filed was for a flashing light beacon that emits a unique flash pattern, making it suitable for search and rescue operations. Dr. Thaler applied for patents naming DABUS as the inventor in several jurisdictions, including the U.S., the U.K., South Africa, the EU, and Australia.
Redress for Victims of Generative AI: Copyright Infringement and Right of Publicity Claims
Within the first month of 2024, a series of disturbing stories surfaced. They all had one thing in common: some flavor of generative artificial intelligence (AI) was used to produce content in which a public figure’s voice or likeness was featured without consent. The AI-generated products were circulated widely and rapidly—and in one case, recipients were specifically targeted for dissemination.
The Digital Services Act and the American Web Space
What is the Digital Services Act?
The European Union’s (“EU”) Digital Services Act (“DSA” or the “Act”), adopted by the European Parliament in July 2022, will apply to all platforms operating in the EU beginning in February 2024. Primarily seeking to protect consumer rights within the online space, the DSA will impose robust data privacy measures, require mechanisms for reporting illegal content, and enhance protections for children on the Internet.
The SEC’s New Rules on Private Equity: Investor Shield or Investment Stranglehold?
Private equity investing is one of the most popular asset classes in finance. As of 2022, $26 trillion worth of capital had been invested through private funds, a category that includes private equity, venture capital, and hedge funds.
Boot-Scootin' to Financial Freedom: Texas Two-Step Bankruptcy Tactics
Recently, several corporations have used a relatively new strategy to shirk liability and streamline the bankruptcy process. This strategy is known as the “Texas Two-Step.” The first step is a corporate restructuring maneuver in which a parent company reorganizes and splits into two subsidiaries. Texas law permits this process under the Texas Business Organizations Code as a divisional, or divisive, merger. Under the Code, the dividing organization must adopt and follow a plan of merger that specifies how assets and liabilities will be distributed, and it must make a filing with the secretary of state. In a divisional merger where the dividing organization does not survive, all of the dividing organization’s “liabilities and obligations are allocated to one or more of the … new organizations in the manner provided by the plan of merger.” Thus, if a company devises a plan of merger in which one of the new organizations retains certain liabilities or obligations while the other retains the business, the new liability-bearing organization is exclusively liable for those liabilities and obligations.
From Nation-States to Cyberspace: Rethinking Sovereignty in the Digital Age
The year is 1648. One of the longest and bloodiest conflicts in European history, the Thirty Years' War, has come to an end. Amidst the vast battlefields and razed towns, diplomats gather and sign treaties in the German provinces of Münster and Osnabrück. They stitch together the tattered fabric of a war-torn continent with the Peace of Westphalia, and, in doing so, inadvertently establish the foundational tenets of state sovereignty: territorial integrity, non-interference, and sovereign equality. Although rooted in the sociopolitical realities of 17th century Europe, these principles continue to guide international jurisprudence to this day as undercurrents of international humanitarian law (“IHL”).
Regulating the U.S. Government’s Use of Artificial Intelligence
Artificial intelligence (“AI”), specifically large language models such as ChatGPT, is developing rapidly. AI’s immense utility has ensured mass adoption by businesses and the public alike. However, the United States government is only beginning to regulate its own internal use of AI. What follows is an overview of recent efforts to regulate government use of AI and of some major developments expected in the near future.
Signed by President Trump on December 3, 2020, Executive Order 13960 sets out principles for the internal use of AI. Order 13960 requires that government use of AI be lawful, purposeful, and effective. It also requires that the AI used be safe, secure, regularly monitored, and accountable. These principles are described only in general terms, but they were intended to guide efforts to regulate the internal use of AI to “foster public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law. . . .”
The Legal and Financial Implications of Twitter’s Rebrand to “X”
Twitter co-founder Jack Dorsey sent the very first “tweet” on March 21, 2006. The tweet read, “just setting up my twttr.” While Twitter began as a side project stemming from a podcasting tool, the growth and impact of this social media platform went well beyond expectation. Since going public in 2013, Twitter has expanded its audience to over 200 million active users, including notable figures such as Oprah Winfrey and Barack Obama. Now, 10 years later, Twitter has approximately 528.3 million monetizable monthly active users and is valued at $41.09 billion.
Twitter has been widely identified by its iconic blue bird logo. According to Jack Dorsey, Twitter’s name in fact stems from the sounds that birds make. Since the site initially only allowed 140-character tweets, status updates on Twitter were short bursts of information, akin to chirps from birds.
Expert Money, Amateur Status: The NCAA’s Slow Evolution Towards Professionalism
In June 2021, the Supreme Court ruled in NCAA v. Alston ("Alston") that the NCAA could no longer stop member schools from offering education-related benefits to student-athletes. The NCAA quickly determined that, in light of the ruling, student-athletes could profit from their name, image, and likeness, otherwise known as “NIL.” Following the Court’s decision, the NCAA released a four-point guidance on its NIL policy. The four points stated:
• Individuals can engage in NIL activities that are consistent with the law of the state where the school is located. Colleges and universities may be a resource for state law questions.
• College athletes who attend a school in a state without an NIL law can engage in this type of activity without violating NCAA rules related to name, image, and likeness.
• Individuals can use a professional services provider for NIL activities.
• Student-athletes should report NIL activities consistent with state law or school and conference requirements to their school.
The Future of Generative AI and Copyright Law
In recent years, artificial intelligence (AI) has taken the world by storm. It has transformed several industries, such as healthcare, finance, marketing, and entertainment. However, the explosion of AI has also come with a surge of litigation and policy changes. Because of AI’s capability to gather massive amounts of existing data and produce new output, many artists, companies, and individuals that own copyrighted works are suing the creators of AI generators. This development has forced the U.S. government (the “Government”) to address issues relating to AI and copyright to ensure that individuals who own copyrighted works are protected.
ChatGPT and the Emergence of Accessible and Potent AI Capabilities
Merriam-Webster defines artificial intelligence (“AI”) as “a branch of computer science dealing with the simulation of intelligent behavior in computers,” or “the capability of a machine to imitate intelligent human behavior.” Since the 1950s, various computer scientists have developed theories to research, develop, and implement artificial intelligence for the benefit of society, and they have grappled with deep philosophical and ethical questions along the way.