American Chestnut Tree Tests the Limits of Genetic Editing
For roughly a decade, researchers have been working to restore the once-prolific American chestnut tree, which has been functionally extinct since the early 20th century – wiped out by chestnut blight introduced through the importation of Asian chestnut trees. After years of traditional breeding efforts, researchers at the State University of New York College of Environmental Science and Forestry (SUNY ESF) began genetically engineering American chestnut trees, eventually developing a variety known as Darling 58, which contains an inserted wheat gene intended to confer blight resistance.
However, recent data from field testing indicate that Darling 58 isn’t performing as hoped, producing shorter, slower-growing trees that may still be susceptible to the blight. Part of the problem appears to be a variety-labeling error: the trees in the test plots are not actually Darling 58 but Darling 54, an engineered variety in which the transgene was inserted on a different chromosome than in Darling 58. Moreover, the transgenic event that created Darling 54 deleted over 1,000 DNA base pairs, the effects of which aren’t entirely known. Even setting aside the identity mix-up, some reports indicate that genuine Darling 58 trees are not performing as hoped and may still be susceptible to blight. These developments have been enough for the American Chestnut Foundation to pull its support from the project that produced Darling 58.
While genetic technologies such as gene editing and transgenic engineering hold great promise and have driven big strides in agricultural productivity, they have limitations which suggest that genetic engineering is not a silver bullet. Innovation in plant technology will likely require a combination of traditional breeding and genetic technology to make future varieties successful. You can read more about the Darling 58 controversies here.
Federal Judge Ponders A.I. as a Potential Tool for Deciding Cases
Generative A.I. seems to be everywhere these days, although practical uses of the technology have proven somewhat elusive and inaccessible to all but the most dedicated techno-geeks. But federal Circuit Judge Kevin Newsom recently made a fairly compelling argument for using A.I. to help decide the result of a lawsuit – and he wrote about the experience in a recent case, Snell v. United Specialty Ins. Co., decided May 28, 2024 by the 11th Circuit.
The case involved a dispute over the meaning of words in an insurance policy. James Snell, a landscaper, was sued over his installation of a ground-level trampoline in a customer’s backyard after the customer’s daughter was injured using the trampoline. Snell notified his insurance company of the claim, but the insurance company declined to defend him, in large part because the company contended that installation of the trampoline did not qualify as “landscaping.” While the case was ultimately decided on different grounds, much of the dispute (if not all of it) centered on the “ordinary meaning” of the word “landscaping.”
A bit of context – interpreting words and phrases is a core function of the courts. They are called upon to interpret constitutions, statutes, regulations, and contracts. One of the fundamental rules that courts follow when interpreting a text is the “ordinary meaning” rule, i.e., words should be given their ordinary, everyday meaning unless there is a clear reason the word should be given a specialized meaning. Thus, when applying this rule, courts often use various dictionaries to determine a word’s “ordinary meaning.”
So, in the quest to decide the meaning of the word “landscaping,” Judge Newsom pondered, “I wonder what ChatGPT thinks about all this,” prompting his law clerk to run a query in ChatGPT: “What is the ordinary meaning of ‘landscaping’?” The answer:
This answer piqued Judge Newsom’s interest and led him to write a captivating concurring opinion about the potential use of A.I.-powered large language models (LLMs) like ChatGPT, Google’s Gemini, and others. Among the Judge’s reasons was the fact that LLMs are trained on enormous data sets reflecting how people use language in their everyday lives (ChatGPT draws on somewhere between 400 and 500 billion words).
Judge Newsom is an incredible writer, and his concurring opinion does a superb job of explaining in detail the potential for using A.I. as a tool for courts to decide cases of this sort. You can read his opinion (beginning on page 25) here.