AI lab by information labs
Top 10 AI lab by information labs Episodes
Goodpods has curated a list of the 10 best AI lab by information labs episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to AI lab by information labs for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite AI lab by information labs episode by adding your comments to the episode page.
AI lab TL;DR | Žiga Turk - Brussels is About to Protect Citizens from Intelligence
AI lab by information labs
04/22/24 • 10 min
🔍 In this TL;DR episode, Professor Žiga Turk (University of Ljubljana, Slovenia) discusses his recent contribution for the Wilfried Martens Centre for European Studies on how “Brussels is About to Protect Citizens from Intelligence” with the AI lab
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:55] Q1 - Why do you think AI regulation prioritises limiting risks over promoting innovation and freedom of expression? How can governments balance security and privacy with technological innovation?
⏲️[05:13] Q2 - You view AI as a 'general technology' that shouldn't be specifically regulated, advocating for technology-neutral laws. What does this mean in practice?
⏲️[09:46] Wrap-up & Outro
🗣️ The mistake is to try to regulate technology, it is behaviours that have to be regulated. (...) If politicians (...) go about regulating every new technology that appears, they will always be behind the curve.
🗣️ The even bigger danger is [the] kind of chilling effect [AI regulation] would have for European industries, people and businesses who will not have access to the latest and greatest AI tools (...).
🗣️ Some AI tools are coming to European customers with a delay or not at all. This puts the whole European economy, its citizens [and] its scientists at a disadvantage with their competition.
🗣️ Investors would be hesitant. Do I want to invest in [AI] in Europe, which is so tightly regulated?
🗣️ I don't think it matters whether you make a deepfake with Photoshop or let AI do it. If deepfakes need to be labelled, they should be labelled regardless of the technology.
🗣️ Admire (...) the thinkers and politicians of the Enlightenment era (...) [for] not going “[the printed press] will create all kinds of unacceptable risks, we have to regulate ex-ante (...)”. Instead, they created (...) legislation on freedom of expression.
🗣️ In the early days (...), the US created regulation that actually freed Internet companies from some potential dangers of hosting user content on their platforms, which created this whole Internet industry and creativity around platforms.
📌 About Our Guest
🎙️ Žiga Turk | Professor, University of Ljubljana (Slovenia)
X https://twitter.com/@zigaTurkEU
🌐 Wilfried Martens Centre for European Studies - Brussels is About to Protect Citizens from Intelligence
https://www.martenscentre.eu/blog/brussels-is-about-to-protect-citizens-from-intelligence/
🌐 Regulating artificial intelligence: A technology-independent approach. European View, 23(1), 87-93
https://doi.org/10.1177/17816858241242890
🌐 Prof. Žiga Turk
https://www.zturk.com/p/english.html
Žiga Turk is a Professor at the University of Ljubljana (Slovenia) and a member of the Academic Council of the Wilfried Martens Centre for European Studies. He holds degrees in engineering and computer science. Prof. Turk was Minister for Growth, as well as Minister of Education, Science, Culture and Sports in the Government of Slovenia and Secretary General of the Felipe Gonzalez Reflection Group on the Future of Europe. As an academic, author and public speaker, he studies communication, internet science and scenarios of future global developments, particularly the role of technology and innovation.
1:1 with Alina Trapova
AI lab by information labs
06/15/23 • 28 min
In this podcast, Alina Trapova (UCL Faculty of Laws) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view
📌Episode Highlights
⏲️[00:00] Intro
⏲️[01:07] The TL;DR Perspective
⏲️[10:05] Q1 - The Deepdive: AI Decrypted | You consider that the way copyright relevant legislation is looked at currently by EU legislators benefits only a limited number of cultural industries. Can you expand on that?
⏲️[17:34] Q2 - The Deepdive: AI Decrypted | In the AI Act, the EP slid in Article 28b(4)(c), relating to transparency. Knowing that it is not that obvious to identify what is copyrighted and what isn’t, do you think this can even be done?
⏲️[22:31] Q3 - The Deepdive: AI Decrypted | You encourage legislators to be cautious when looking at regulating an emerging digital technology. Where do you see a risk of using an elephant gun to kill a fly?
⏲️[26:47] Outro
📌About Our Guest
🎙️ Dr Alina Trapova | Lecturer in Intellectual Property Law, UCL Faculty of Laws
🐦 https://twitter.com/alinatrapova
🌐 European Parliament AI Act Position Put to Vote on 14 June, 2023
🌐 European Law Review [(2023) 48] | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Kluwer Copyright Blog | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Institute of Brand and Innovation Law (UCL, University College London)
🌐 Dr Alina Trapova
Dr Alina Trapova is a Lecturer in Intellectual Property Law at University College London (UCL) and a Co-Director of the Institute for Brand and Innovation Law. Alina is one of the Co-Managing Editors of the Kluwer Copyright Blog. Prior to UCL, she worked as an Assistant Professor in Autonomous Systems and Law at the University of Nottingham (UK) and at Bocconi University (Italy). Before joining academia, she worked in private practice, as well as at the EU Intellectual Property Office (EUIPO) and the International Federation of the Phonographic Industry (IFPI).
AI lab - Teaser
AI lab by information labs
06/01/23 • 0 min
The AI lab podcast will be launched in June 2023 with the aim of "decrypting" expert analysis to understand Artificial Intelligence from a policy making point of view.
AI lab - AI in Action | Episode 01: AI History
AI lab by information labs
05/07/24 • 9 min
We are kickstarting our AI in Action series by diving headfirst into the key milestones that led to the gradual deployment of Artificial Intelligence, or AI for short. You might think it's some shiny new invention, looking at all the recent media coverage about robots taking over your jobs and writing bad poetry. But hold on to your Roomba, because AI has been around longer than your grandma’s pocket calculator.
Read more & grab the infographic of this timeline here:
https://informationlabs.org/ai-lab-ai-in-action-episode-01-ai-history/
AI lab hot item | Michiel Van Lerbeirghe (ML6) - Copyright Transparency: An AI Firm’s Perspective
AI lab by information labs
11/14/23 • 10 min
🔥 In this 'Hot Item', Michiel Van Lerbeirghe (ML6) & the AI lab explore how the push for copyright transparency in the EU AI Act could impact smaller European AI providers and how we can move towards a practical solution
📌Hot Item Highlights
⏲️[00:00] Intro
⏲️[00:45] Michiel Van Lerbeirghe (ML6)
⏲️[08:28] Wrap-up & Outro
🗣️ Copyright protection is subjective: it is definitely not up to providers of foundation models to rule whether the criteria are met. However, under the current version of the AI Act, they would be required to make that assessment.
🗣️ The current obligation regarding copyright [transparency] is almost impossible to comply with. (...) The obligation is still under review, and we hope that we can evolve to a mechanism that makes more sense.
🗣️ While transparency is definitely a good thing that should be supported, (...) the upcoming [copyright transparency] obligation could prove to be very difficult, and not to say impossible, to comply with.
🗣️ Copyright can actually go very far and a lot of different content can potentially be protected by copyright. (...) From a practical point of view: where would the [transparency] obligation start and where would it end?
📌About Our Guest
🎙️ Michiel Van Lerbeirghe | Legal Counsel, ML6
🌐 Assessing the impact of the EU AI Act proposal (ML6 Blog Post)
🌐 ML6
Michiel is an IP lawyer focusing on artificial intelligence. After working for law firms for multiple years, he recently became the in-house legal counsel for ML6, a leading European service provider building and implementing AI systems for several multinationals.
AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox
AI lab by information labs
10/21/24 • 14 min
🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) discusses his Substack blog post delving into the ‘Intelligence Paradox’ with the AI lab
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:08] Q1-The ‘Intelligence Paradox’:
How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’?
⏲️[05:36] Q2-‘Conceptual Borrowing’:
What is ‘conceptual borrowing’ and how does it impact public perception and understanding of AI?
⏲️[10:04] Q3-Human vs AI ‘Learning’:
Why is it misleading to use the term ‘learning’ for AI processes and what does this mean for the future of AI development?
⏲️[14:11] Wrap-up & Outro
💭 Q1-The ‘Intelligence Paradox’
🗣️ What’s really interesting about chatbots and AI is that for the first time in human history, we have technology talking back at us, and that's doing a lot of interesting things to our brains.
🗣️ In the 1960s, there was an experiment with Chatbot Eliza, which was a very simple, pre-programmed chatbot (...) And it showed that when people are talking to technology, and technology talks back, we’re quite easily fooled by that technology. And that has to do with language fluency and how we perceive language.
🗣️ Language is a very powerful tool (...) there’s a correlation between perceived intelligence and language fluency (...) a social phenomenon that I like to call the ‘Intelligence Paradox’. (...) people perceive you as less smart, just because you are less fluent in how you’re able to express yourself.
🗣️ That also works the other way around with AI and chatbots (...). We saw that chatbots can now respond in extremely fluent language very flexibly. (...) And as a result of that, we perceive them as pretty smart. Smarter than they actually are, in fact.
🗣️ We tend to overestimate the capabilities of [AI] systems because of their language fluency, and we perceive them as smarter than they really are, and it leads to confusion (...) about how the technology actually works.
💭 Q2-‘Conceptual Borrowing’
🗣️ A research article (...) from two professors, Luciano Floridi and Anna Nobre, (...) explaining (...) conceptual borrowing [states]: “through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers."
🗣️ Similar to the Intelligence Paradox, it can lead to confusion (...) about whether we underestimate or overestimate the impact of a certain technology. And that, in turn, informs how we make policies or regulate certain technologies now or in the future.
🗣️ A small example of conceptual borrowing would be the term “hallucinations”. (...) a common term to describe when systems like chatGPT say something that sounds very authoritative and sounds very correct and precise, but is actually made up, or partly confabulated. (...) this actually has nothing to do with real hallucinations [but] with statistical patterns that don’t match up with the question that’s being asked.
💭 Q3-Human vs AI ‘Learning’
🗣️ If you talk about conceptual borrowing, “machine learning” is a great example of that, too. (...) there's a very (...) big discrepancy between what learning is in the psychological terms and the biological terms when we talk about learning, and then when it comes to these systems.
🗣️ So if you actually start to be convinced that LLMs are as smart and learn as quickly as people or children (...) you could be over-attributing qualities to these systems.
🗣️ [ARC-AGI challenge:] a $1 million USD prize pool for the first person that can build an AI to solve a new benchmark that (...) consists of very simple puzzles that a five-year old (...) could basically solve. (...) it hasn't been solved yet.
🗣️ That’s, again, an interesting way to look at learning, and especially where these systems fall short. [AI] can reason based on (...) the data that they've seen, but as soon as it (...) goes out of (...) what they've seen in their data set, they will struggle with whatever task they are being asked to perform.
📌 About Our Guest
🎙️ Jurgen Gravestein | Sr Conversation Designer, Conversation Design Institute (CDI)
X https://x.com/@gravestein1989
🌐 Blog Post | The Intelligence Paradox
https://jurgengravestein.substack.com/p/the-intelligence-paradox
🌐 Newsletter
https://jurgengravestein.substack.com
🌐 CDI
https://www.conversationdesigninstitute.com
🌐 Profs. Floridi & Nobre's article
http://dx.doi.org/10.2139/ss...
AI lab TL;DR | Derek Slater - What the Copyright Case Against Ed Sheeran Can Teach Us About AI
AI lab by information labs
05/30/24 • 13 min
🔍 In this TL;DR episode, Derek Slater (Proteus Strategies) discusses his recent blog post on the Tech Policy Press website, titled “What the Copyright Case Against Ed Sheeran Can Teach Us About AI”, with the AI lab
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:11] Q1 - Legal boundaries & creativity:
How to define the boundary between protectable expression and unprotectable building blocks in music & other creative fields?
What lessons does this offer for generative AI?
⏲️[05:13] Q2 - Consent vs. enclosure:
What is enclosure?
How can we balance it with consent in regulating AI tools?
What guiding principles should policymakers follow to not stifle innovation & creativity?
⏲️[09:35] Q3 - Technological impact on art:
What is the long-term impact of generative AI on music & artistic expression, as other technological advances ultimately revolutionised creative industries after an initial backlash?
⏲️[12:18] Wrap-up & Outro
💭 Q1 - Legal boundaries & creativity
🗣️ All creativity builds on the past. All songs are made up of a limited number of notes and chords available to the composers [and] to protect their combination would give Let’s Get It On an impermissible monopoly[, the judge said].
🗣️ Copyright has always allowed certain uses of existing content (...) by drawing lines between protectable expression and unprotectable ideas, facts, and other elements.
🗣️ Rightsholders can demand consent for some uses, but they are not allowed to enclose and cut off the basic building blocks of culture and knowledge.
🗣️ Generative AI: (...) it’s a big statistical analysis of lots and lots of texts to derive rules about syntax and how different concepts are related (...) For music, it’s analysing lots and lots of music to tease out those basic building blocks.
🗣️ [AI training] can’t be reduced to the simplicity of consent (...) because the question is: consent for what? (...) Deriving insights [and] uncopyrightable elements from protectable expression generally can be permissible.
💭 Q2 - Consent vs. enclosure
🗣️ We also recognise downsides to [copyright], (...) meaning the public can no longer freely build on and use it. (...) We’ve always had copyright protection but also limits so that enclosure (...) doesn’t go too far.
🗣️ When is it unethical to stop people from (...) using basic building block[s] of language or music? Because that information, that knowledge, those cultural artefacts, ought to belong to the public.
🗣️ I think from a copyright perspective, the first key principle is: is this protection necessary to encourage creativity (...)? If creativity is already booming, abundant, and would happen anyway (...) then there should not be an issue.
🗣️ When we think about generative AI, these are tools for productivity, for creativity, not for piracy (...). They’re not about simply reusing the works that they were trained on in the outputs. (...) That’s considered a bug, a failure (...) and something to be avoided.
🗣️ When somebody uses [an AI] tool like Suno or Udio to create a new song, that’s in line with copyright’s purpose. (...) It crosses the line (...) where that output is directly substituting, reusing that communicative expression embodied in some specific work.
💭 Q3 - Technological impact on art
🗣️ One way to think about [AI] is sort of like the synthesizer, computer-generated graphics or Photoshop, where, at first, people said, this is not music, [or] art, and over time, it became integrated into artistic processes in a variety of ways.
🗣️ [2023] Oscar winner, ’Everything Everywhere All at Once’, used the generative AI tool Runway to edit one of its famous scenes. Nobody knew that was generative AI at the time. Nobody said ‘Oh, this is a generative AI movie’, but it was part of their artistic process.
🗣️ It’s acknowledged that generative AI is driving an abundance of creativity. (...) So that fundamentally is not at odds with (...) copyright. I think most of the concerns that people have are...
AI lab TL;DR | Elisa Giomi - The Unacknowledged AI Revolution in the Media & Creative Industries
AI lab by information labs
06/18/24 • 18 min
🔍 In this TL;DR episode, Dr Elisa Giomi, Associate Professor at Roma Tre University and Commissioner of the Italian Communications Regulatory Authority (AGCOM), discusses her recent contribution in Intermedia, the journal of the International Institute of Communications (IIC), titled “The (almost) unacknowledged revolution of AI in the media and creative industries”, with the AI lab
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:15] Q1 - AI’s impact vs. past revolutions:
How does AI’s impact on media and creative industries compare to historical technological revolutions?
⏲️[05:22] Q2 - Navigating AI in media:
How should we balance AI’s benefits in combating misinformation vs its potential risks?
⏲️[11:18] Q3 - Balancing copyright & AI:
You state that: “[AI] and human intelligence follow [a] not dissimilar logic. So we should not use a double standard to regulate them”. What should a balanced approach to copyright in AI look like?
⏲️[17:48] Wrap-up & Outro
💭 Q1 - AI’s impact vs. past revolutions
🗣️ The [AI] revolution (...) in the media and creative industries, as many previous ones, will probably be declared a revolution only long after it happened.
🗣️ AI in the media sector[:] Its disruptive effect goes unnoticed (...), [and] the media and creative industries remain under the radar in the public debate, since they are not among the leading adoption fields.
🗣️ Two of the winners of the last Pulitzer Prize for journalism admitted using AI systems in their investigation and getting so many benefits from AI.
🗣️ Why the AI revolution looks like the main technological revolutions of the past? Its ability to divide [and] polarise, the public debate between enthusiasts (...) and radical opponents (...).
💭 Q2 - Navigating AI in media
🗣️ Every technological innovation has been accompanied by a sort of squinting effect which leads to amplifying the distorted uses to the detriment of the more abundant beneficial applications.
🗣️ Demonising AI for fear of its side effects would be as if in the past we had refused to switch from the plough to the tractor for fear that the tractor could pollute or run over people and animals.
🗣️ AI is not only used to produce fake news and misleading content, but also in fact checking and identifying deepfakes. It is used in fighting disinformation.
🗣️ Only by taking into account opportunities and risks at the same time, we will be able to develop a balanced regulation and avoid emergency and radical responses in the wake of moral panics produced by AI misuses.
🗣️ The media (...) are likely to shape our perception of the world and to guide other choices, so they should have been included in the [EU AI Act] high-risk sectors.
💭 Q3 - Balancing copyright & AI
🗣️ I have strong misgivings about the remuneration hypothesis[:] (...) it privileges publishers over any other content producers.
🗣️ I’m not sure having different rules for the human and artificial mind makes sense. My conclusion here is that maybe it’s too early to find a solution to the copyright problems raised by AI.
🗣️ Any balanced resolution must have two starting points: first, a rigorous analysis of the real value chain (...), and second, (...) [a] precise diagnosis. (...) Regulate only when there is a [real] pathology to be healed.
📌 About Our Guest
🎙️ Dr Elisa Giomi | Associate Professor at Roma Tre University & Commissioner of the Italian Communications Regulatory Authority (AGCOM)
🌐 International Institute of Communications (IIC) - The (Almost) Unacknowledged Revolution of AI in the Media and Creative Industries
🌐 AGCOM - Dr Elisa Giomi
https://www.agcom.it/elisa-giomi
Dr Elisa Giomi is an associate professor at Roma Tre University, Department of Philosophy, Communication and Performing Arts, and a commissioner of AGCOM, the...
1:1 with Pamela Samuelson
AI lab by information labs
01/17/24 • 38 min
In this podcast, Pamela Samuelson (UC Berkeley School of Law) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view
📌Episode Highlights
⏲️[00:00] Intro
⏲️[02:59] Q1 - The Deepdive: AI Decrypted | What significant practical obstacles in complying with a transparency obligation about copyrighted works in training data do you identify?
⏲️[10:50] Q2 - The Deepdive: AI Decrypted | Looking at the disassembly or tokenization in the training process, can you explain why “generative AI models are generally not designed to copy training data; they are designed to learn from the data at an abstract and uncopyrightable level”?
⏲️[18:58] Q3 - The Deepdive: AI Decrypted | On generative AI outputs: 1) why is the idea that an AI could or should be recognised as author problematic, and 2) could prompts be detailed enough to meet the threshold of authorship?
⏲️[26:30] Q4 - The Deepdive: AI Decrypted | On licensing AI input your submission states: “(...) it will be impossible under current technologies to calibrate payments made under a collective licensing arrangement to actual usage of individual authors’ works.” What’s at stake?
⏲️[35:37] Outro
🗣️ A rule that (...) you have to keep very, very accurate records about what your training datasets are (...) is just (...) impractical if you care about (...) a large number of people instead of a few big companies being able to participate in the (...) generative AI space.
🗣️ Data basically is in a certain form in the in-copyright works that are part of the training data but the model does not embody the training data in a recognisable way. (...) It's just not the way we think about the component elements of copyright works.
🗣️ If you think [licensing] will mean that authors will be able to continue to make a living, we're talking about really small change here in terms of each author's entitlement. It's not like you're going to get $10,000 or $50,000 a year.
🗣️ The collective license idea doesn't pay attention to (...) that we're talking about billions of works, (...) billions of authors, (...) a lot of things that essentially have no commercial value.
🗣️ [Collective licensing:] it's so impractical that it's just not really feasible. (...) No question that collecting societies would (...) be the big beneficiaries of this, not the authors.
🗣️ If a voluntary licensing regime works (...), I think that's fine. (...) [A] mandate that everything be licensed (...) is kind of unrealistic.
📌About Our Guest
🎙️ Pamela Samuelson | Richard M. Sherman Distinguished Professor of Law and Information, UC Berkeley School of Law
X https://twitter.com/PamelaSamuelson
🌐 Comments in Response to the U.S. Copyright Office’s Notice of Inquiry on Artificial Intelligence and Copyright by Pamela Samuelson, Christopher Jon Sprigman, and Matthew Sag (30 October 2023)
🌐 U.S. Copyright Office Issues Notice of Inquiry on Copyright and Artificial Intelligence
🌐 Allocating Ownership Rights in Computer-Generated Works (Pamela Samuelson, 1985)
🌐 Common Crawl
🌐 Shutterstock Expands Partnership with OpenAI, Signs New Six-Year Agreement to Provide High-Quality Training Data
🌐 Prof Pamela Samuelson
Pamela Samuelson is the Richard M. Sherman Distinguished Professor of Law and Information at UC Berkeley. She is recognized as a pioneer in digital copyright law, intellectual property, cyberlaw and information policy. Professor Samuelson is a director of the internationally-renowned Berkeley Center for Law & Technology. She is co-founder and chair of the board of Authors Alliance, a nonprofit organization that promotes the public interest in access to knowledge. She also serves on the board of directors of the Electronic Frontier Foundation, as well as on the advisory boards for the Electronic Privacy Information Center, the Center for Democracy & Technology, and Public Knowledge. Professor Samuelson has written and published extensively in the areas of copyright, software protection and cyberlaw, with recent publications looking into the possible i...
AI lab TL;DR | Nuno Sousa e Silva - Are AI Models’ Weights Protected Databases?
AI lab by information labs
03/12/24 • 10 min
🔍 In this TL;DR episode, Assistant Professor Nuno Sousa e Silva (Universidade Católica Portuguesa) discusses his recent Kluwer Copyright Blog contribution, “Are AI Models’ Weights Protected Databases?”, with the AI lab.
📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:41] Q1 - What are weights in an AI model?
⏲️[05:14] Q2 - Why could the EU Database Directive apply to AI models in certain cases, and what would the consequences be?
⏲️[09:23] Wrap-up & Outro
🗣️ Models are basically tools that humans use to simplify the real world, to boil it down, to describe it, and the way that this is done is through mathematical functions.
🗣️ Weights are nothing but a set of numerical values that represent the strength of the connection of neurons in a neural network.
🗣️ [The Database Directive’s] aim is to protect the investment in the creation, presentation, and verification of data, so basically data products and the producer of data products. Admittedly, this had no AI models in mind.
🗣️ We know how much money and effort is put into developing [an AI] model and that the model is really the weights.
🗣️ For EU-based companies that qualify, that means that they have a right to control the reuse or extraction of a substantial part of that database, in other words (...): a right to control the use of the model beyond contractual rules.
🗣️ Some people say that if we want to talk about open source in AI, it needs to be both the disclosure of the training set and the model weights.
📌 About Our Guest
🎙️ Nuno Sousa e Silva | Lawyer (Partner @ PTCS) & Law Professor (Universidade Católica Portuguesa)
🌐 Kluwer Copyright Blog - Are AI Models’ Weights Protected Databases?
https://copyrightblog.kluweriplaw.com/2024/01/18/are-ai-models-weights-protected-databases/
🌐 EU Database Directive (96/9/EC)
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A31996L0009
🌐 Nuno Sousa e Silva
Nuno Sousa e Silva is a Lawyer (Partner at PTCS) and a Law Professor. He graduated from the Law School of the Catholic University of Portugal (Porto) and obtained a Master of Laws and a PhD from the same University and holds an LLM degree in Intellectual Property and Competition Law (MIPLC). Nuno acts frequently as an arbitrator, advisor, and legal expert for companies, governments, and international institutions. He published four books and over fifty articles on Intellectual Property, IT Law, EU Law, and Private Law. He has taught and given lectures in Portugal, Germany, Hungary, Poland, Denmark, and the UK.
FAQ
How many episodes does AI lab by information labs have?
AI lab by information labs currently has 24 episodes available.
What topics does AI lab by information labs cover?
The podcast is about Deep Learning, Podcasts, Technology, Science, Artificial Intelligence, Data Science and Machine Learning.
What is the most popular episode on AI lab by information labs?
The episode title '1:1 with Brigitte Vézina' is the most popular.
What is the average episode length on AI lab by information labs?
The average episode length on AI lab by information labs is 16 minutes.
How often are episodes of AI lab by information labs released?
Episodes of AI lab by information labs are typically released every 16 days.
When was the first episode of AI lab by information labs?
The first episode of AI lab by information labs was released on Jun 1, 2023.