Interview with Tiffany Georgievski: AI's Role in the Legal Landscape

Tiffany Georgievski, AI Attorney at Sony Research

Tiffany Georgievski, a leading AI Attorney at Sony Research, sat down with us to discuss the transformative power of AI in the legal domain and the challenges it presents.

"AI can enhance efficiencies and revolutionize the legal industry," Georgievski began, her eyes reflecting the screens of code running in the background. "But I don't anticipate lawyers being entirely replaced. Especially technology lawyers in the AI domain, as many answers aren't predefined."

The legal profession, like many others, is on the cusp of an AI-driven evolution. While some fear the replacement of human jobs, Georgievski offers a more nuanced perspective. "The distinction will lie between lawyers who leverage AI tools and those who don't," she explained.

As we delved deeper into the conversation, the topic of technology companies and their utilization of AI surfaced. "The rise of large language models has spurred innovation, especially in internal processes and product enhancements," Georgievski noted. She highlighted the integration of AI into customer service and online gaming as particularly noteworthy developments.

Education, another sector ripe for disruption, was not far from our discussion. Georgievski's optimism was palpable. "AI's promise lies in its adaptability and personalization. A virtual tutor attuned to a student's needs can revolutionize learning." However, she was quick to add a note of caution, emphasizing the importance of independent research. "Students should be encouraged to conduct independent research and fact-check, even when using AI outputs."

Cybersecurity, a domain that has seen rapid advancements and challenges, is also feeling the effects of AI. "AI presents both challenges and solutions in cybersecurity," Georgievski stated, her tone turning serious. "While it can be used maliciously, it can also detect and counteract threats." She expressed particular concern for individuals without cybersecurity expertise, especially given the rise of voice cloning scams.

Reflecting on the broader technological landscape, Georgievski mused, "AI is likely the most profound technological advancement of my era." Drawing parallels with the music industry's transformation due to digital music, she highlighted AI's versatility and the unique regulatory challenges it presents.

The conversation took a philosophical turn as we touched upon the concept of creation and authorship in the age of AI. "While the Copyright Office asserts that AI cannot be an author, the real challenge lies in determining what qualifies for copyright protection," Georgievski pondered. She alluded to the evolving nature of creation, citing the example of electronic music and its impact on our understanding of what constitutes 'creation'.

As our conversation drew to a close, it was evident that the legal world, like many others, stands at the crossroads of tradition and innovation. With thought leaders like Tiffany Georgievski at the helm, the journey ahead promises to be both challenging and rewarding.


AI FORUM:  

AI has been hyped quite a bit and some people would say it's been overhyped.  To start with a big hypothetical question, will AI-enabled robots be the lawyers and other professionals of the future?


Tiffany Georgievski, AI Attorney, Sony Research:

I hope not. I think that AI will bring a lot of efficiencies and new processes to help revolutionize the legal industry, particularly by making research more efficient. But I don't think that lawyers will be replaced wholesale, particularly technology lawyers focused on the AI space, because a lot of those answers aren't yet written. The distinction we should make is between lawyers who understand how to use AI tools to produce their work product more efficiently and lawyers who don't.


AI FORUM:

Do you see technology companies as a whole making use of AI right now?


Georgievski:

Yes, I think it’s happening across the board, particularly with large language models. There’s a lot of innovation regarding how AI can be used within organizations for internal and back-end processes. I think we'll also see it being integrated into enhancing features in products and services.  You can already imagine customer service support operations utilizing these technologies.  I’m excited to see valuable add-on features in other spaces, such as enhancing online gaming experiences. 


AI FORUM:

What’s your view on AI applications in the field of education in terms of personalized tutorials and individualized learning?


Georgievski:

The great promise of what we're seeing with many of these AI technologies is the ability to adapt or to be personalized. I think we know from a research and education standpoint that a “one size fits all” approach doesn't really work for students and learning. In terms of individualized learning--basically being able to have a virtual tutor who can be more attuned to what a student already knows and their learning style--I think that's the great promise.

I still hope that students maintain the ability to learn how to do their own research, and don't just use a large language model to produce those outputs. There have been recent studies showing the different political biases among the large language models. We need to have larger policy conversations about the educational use of large language models and how these systems influence what students take away. We also need to ensure that we’re teaching people how to independently fact-check and to think critically instead of simply relying on the technology.


AI FORUM:

Frankly, that has been a challenge since the evolution of the Internet and Wikipedia, where kids learned very quickly to download their book reports from web pages.


Georgievski:

Adults, as well, are having to figure out what is real and what is not. When it comes to minors, particularly, I think there should be more obligations on providers to be more responsible in terms of how they are releasing products marketed to and used by students and minors. Hopefully, we'll continue seeing other types of organizations, like the FTC, get involved in those types of conversations.


AI FORUM:

Looking at the global picture, Europe already has published some AI guidelines. Do you have some general thoughts and perspective on this type of regulation? Is it a better approach to “make the sandbox safe for everybody to play in,” or should we adopt more of an American approach, which favors opportunistic regulation?


Georgievski:

I agree that there should be some type of regulation in advance, in the sense of trying to address some of the potential harms that we know presently exist with these technologies. For example, we have seen facial recognition technologies that reflect biases in the training data, and there are concerns when these flawed technologies can be used by law enforcement without any additional safeguards. The technology industry is also trying to sort out what the privacy implications are in terms of how AI is built, particularly when it comes to training data.

In terms of the EU AI Act, which is still being negotiated, it is expected to be finalized by the end of this year, or beginning of 2024.  It will go into effect probably two years after it is finalized. However, even though the Europeans started this process way in advance, given the rate of advancements in AI, it now looks like the AI Act will be implemented too late.  Organizations will need time to adapt to new regulation, and the technology is going to be in a very different place at that time. In the interim, there have been discussions about a US--EU AI code of conduct; essentially,  voluntary commitments from organizations providing AI technologies before the EU AI Act comes into effect.  


AI FORUM:

Let’s shift gears and talk about cybersecurity. There have been experts who have said that AI is going to pose a new, much more dangerous cybersecurity threat. Do you have a view on that?


Georgievski:

As with many types of new technology, it's always kind of a game of “Whack-a-Mole,” right? Because there are powerful open source AI models, anyone has access to these tools--the good guys and the bad guys both have them. We need to ensure that, from a cybersecurity perspective, the good guys also have the access and the training to be able to understand, prevent, and respond to adversarial attacks and threats. It will be similar to what we’ve seen in the past with other types of cybersecurity techniques. We know that you can use AI to create malware, but on the other side of the coin, it can be used to detect and defend against malware.

On the cybersecurity front, I’m more concerned about individuals who aren't cybersecurity experts and who won’t have the tools to protect themselves. We've seen this already with voice cloning technologies and the types of scams out there that target vulnerable adults. Educating the public about cybersecurity risks and online scams is going to be a challenge. I mean, the FTC has been battling that challenge for a while . . . we've only just reached general agreement on the danger of the “Nigerian Prince” scam.
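
Georgievski's point that the same tools serve attackers and defenders can be made concrete. The sketch below is a minimal, hypothetical illustration of the defensive side she describes--using an off-the-shelf anomaly detector to flag network connections that look nothing like normal traffic. The features, numbers, and library choice (scikit-learn's IsolationForest) are assumptions made here for illustration, not anything discussed in the interview.

```python
# Minimal sketch: AI-assisted threat detection via anomaly scoring.
# All features and figures here are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per connection: [bytes sent, bytes received, duration in seconds]
normal_traffic = rng.normal(loc=[500, 800, 2.0], scale=[100, 150, 0.5], size=(1000, 3))
suspicious = np.array([
    [50_000, 100, 0.1],   # exfiltration-like burst: huge upload, near-zero duration
    [10, 90_000, 30.0],   # unusual inbound transfer
])

# Fit the detector on "known good" traffic, then score new connections.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies; both suspicious rows
# should come back as -1 because they sit far outside the training distribution.
print(detector.predict(suspicious))
```

In practice, defensive systems layer many such signals, but even this toy example shows why defenders want access to the same statistical tooling that attackers can now automate.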


AI FORUM:

This is a major consumer challenge. 


Georgievski:

This is going to be a problem in the workplace as well--voice cloning a CEO, for example, who calls and orders employees to carry out a certain task. We’ll need more workplace training to address this. Cybersecurity teams within organizations and government will need to be equipped with tools to detect and defend against malicious attacks.


AI FORUM:

Another global question: does AI represent another major inflection point in the evolution of technology?


Georgievski:

Yes, it is probably going to be one of the largest technological breakthroughs of my lifetime. As an IP lawyer, I'm reminded of the digitization of music and streaming and how that really shook up the whole ecosystem of the music industry. What does that mean here? We need to recall the lessons we've learned from previous technological breakthroughs. Scholars are going back and reminding us of the controversies that arose over the invention of the camera and then photo editing tools. Both gave rise to questions of authorship. Over time, we’ve experienced iterations of new technologies and their associated hype, and we've always managed to figure it out.

AI is unique in that it isn't necessarily limited to impacting certain industries. You can apply it in a chatbot. Airplanes might be using AI agents. I mean, the possibilities are endless, which poses a larger challenge in deciding what types of AI regulation might make sense. Should we take a general approach or a sectoral, industry-by-industry approach?


AI FORUM:

The Copyright Office has made some tentative suggestions on the subject of whether you could give authorship to an AI tool. What's your thinking on that?


Georgievski:

I think the Copyright Office is pretty clear that an AI agent or tool in itself cannot be an author. There has to be some type of human creation. But I think the challenge before the Copyright Office will actually be to determine what qualifies for copyright protection. Under existing law, the bar for human creativity is quite low. Some are arguing that prompt engineering may be sufficient. We're seeing cases where artists and creators aren't sure whether they will get protection for a work if they use AI tools. People will need to get clarity on this because it's such an important point.

This raises issues around “what does it mean to be a creator?” We saw this in electronic music. Back in the day, musicians needed to play instruments to make music. Now we have DJs who can utilize technology to create music. Should artists be treated differently because the technology is different? I think we’re going to see a new wave of creators across all areas of art using AI technologies.

AI FORUM:

What if an AI tool itself started filing copyright registrations saying “I'm the author,” which implies a sense of agency to the tool that we haven't seen yet? 

Georgievski:

We have to think about the implications of something that can generate billions of works and the fact that copyright offers a monopoly of rights to the owner.  What would that mean if an AI tool was used to create nearly every permutation of a cat picture? I have heard of some initiatives where some people are trying to create every musical composition. 
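
The scale problem behind this answer is easy to quantify. The following back-of-the-envelope sketch uses hypothetical parameters (an 8-beat, 12-pitch melody space and a 16x16 black-and-white image space, both invented here for illustration) to show how quickly even tiny creative spaces explode combinatorially, which is what makes bulk machine-generated registrations so fraught for a system that grants exclusive rights in each work.

```python
# Back-of-the-envelope combinatorics behind the "every permutation" concern.
# Every parameter below is hypothetical and chosen purely for illustration.

# A toy melody space: one of 12 pitches for each of 8 beats.
pitches, beats = 12, 8
melodies = pitches ** beats
print(f"{melodies:,} possible {beats}-beat, {pitches}-pitch melodies")  # 429,981,696

# A toy image space: every possible 16x16 black-and-white picture.
pixels = 16 * 16
images = 2 ** pixels
print(f"roughly 10^{len(str(images)) - 1} possible 16x16 binary images")
```

Even these miniature spaces dwarf everything humans have ever created, and real generative models sample from vastly larger ones--an order-of-magnitude gesture at the concern Georgievski raises, not a legal argument.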


AI FORUM:

We used to use the metaphor that if you gave 100 monkeys typewriters and stuck them in a room, they would eventually create the works of William Shakespeare . . .


Georgievski:

There's actually a Simpsons reference where they have that!


AI FORUM:

Yes, there is.  Well, thank you so much for chatting with us today and enlightening us on these issues!


Georgievski:

You’re welcome.


Transcribed by https://otter.ai
