Many IP professionals today are beginning to incorporate AI into their work to improve accuracy and efficiency. However, many find it difficult to get started in the new world of AI. For example, they have questions about the differences between open source and closed source LLMs, what local and cloud deployment of LLMs means, or what prompt engineering does. To lower this barrier to entry, our IP Business Academy expert Sebastian Goebel is offering a webinar on this topic on 5th December this year to help in-house and external IP professionals build AI skills.

👉 If you book the seminar via the following link, you will benefit from the best conditions for the IP Business Academy community. You can find more information here.

Open source vs closed source LLMs

Open source and closed source LLMs offer distinct advantages depending on your needs. Open source models like Llama 2 provide transparency, allowing for customization and community-driven development. They’re often more cost-effective but may require technical expertise to deploy. Closed source models like GPT-4 prioritize ease of use and often deliver state-of-the-art performance. However, they lack transparency and customization options, and usually come with licensing or usage fees.

Choose open source if you need flexibility, transparency, and cost-effectiveness, especially for research or niche applications. Opt for closed source if you prioritize performance, ease of use, and are willing to pay for a ready-to-use solution, particularly for commercial applications.

Local vs cloud deployment of LLMs

Local deployment means running the LLM on your own hardware, offering enhanced data privacy and control. It eliminates reliance on internet connectivity and can reduce latency for real-time applications. However, it demands significant upfront investment in hardware and technical expertise for setup and maintenance. Choose local deployment if you prioritize data privacy, control, and low-latency performance, and have the resources for hardware and maintenance.

Cloud deployment, leveraging providers like AWS or Azure, offers scalability and ease of use. You can quickly access powerful LLMs without managing infrastructure. However, data privacy concerns arise as your data resides on external servers. Additionally, ongoing costs and potential network latency can be drawbacks. Choose cloud deployment for its scalability, ease of use, and lower upfront costs, as long as data privacy is not a major concern.
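To make the cost side of this trade-off concrete, a rough break-even calculation can help. The sketch below is purely illustrative: the hardware price, maintenance cost, and cloud fees are invented example figures, not real quotes from any provider.

```python
# Illustrative break-even sketch: local vs cloud LLM deployment.
# All figures are hypothetical assumptions for demonstration only.

def months_to_break_even(hardware_cost: float,
                         local_monthly_cost: float,
                         cloud_monthly_cost: float) -> float:
    """Return the number of months after which buying local hardware
    becomes cheaper than paying cloud usage fees; infinity if never."""
    monthly_saving = cloud_monthly_cost - local_monthly_cost
    if monthly_saving <= 0:
        return float("inf")  # cloud stays cheaper indefinitely
    return hardware_cost / monthly_saving

# Hypothetical numbers: a 10,000 EUR GPU workstation plus 200 EUR/month
# for power and maintenance, versus 700 EUR/month in cloud API fees.
months = months_to_break_even(10_000, 200, 700)
print(f"Break-even after {months:.0f} months")  # 20 months
```

A calculation like this only captures direct costs; in practice, data-privacy requirements or the staff time needed for setup and maintenance often dominate the decision.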

Prompt engineering

Prompt engineering is the art and science of crafting effective instructions for large language models (LLMs) to generate desired outputs. It involves carefully selecting words, phrasing, and context to guide the LLM’s behaviour.

Good prompts can elicit creative text, accurate translations, or informative summaries. They provide context, set constraints, and specify the desired format or style.

Prompt engineers experiment with different techniques like few-shot learning and chain-of-thought prompting to optimize LLM performance. It’s an iterative process requiring constant refinement to align the LLM’s output with the intended purpose. Effective prompt engineering can significantly enhance the quality and relevance of LLM-generated content.
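As a small illustration of few-shot prompting, the snippet below assembles a prompt from worked input/output examples before the actual query, so the model can infer the desired format from the demonstrations. The classification task and examples are invented for illustration; a real prompt would be tuned to the specific model and use case.

```python
# Minimal few-shot prompt builder (illustrative; the examples are invented).

def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Prepend worked examples to the query so the LLM can infer
    the desired format and style from the demonstrations."""
    parts = [instruction, ""]
    for source, target in examples:
        parts.append(f"Input: {source}")
        parts.append(f"Output: {target}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify each patent claim feature as 'technical' or 'non-technical'.",
    [("a neural network compressing sensor data", "technical"),
     ("a discount scheme for loyal customers", "non-technical")],
    "an encryption step securing transmitted measurement values",
)
print(prompt)
```

Chain-of-thought prompting works in a similar way, except that each example also demonstrates the intermediate reasoning steps, encouraging the model to reason before answering.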

If you want to learn more about LLMs and how to use them as an IP professional, please have a look at this workshop by Sebastian Goebel.

About the speaker:

Sebastian Goebel is a European and German patent attorney, UPC representative, and co-founder of the patent law firm Bösherz Goebel with a primary focus on innovations in the field of digital technologies.

His professional journey commenced as an electrical engineer and software developer, during which he also contributed to research in medical technology and Machine Learning (ML), including at the Ruhr University Bochum and the University of California, Los Angeles. In this context, he was awarded the Inventor Prize of the Ruhr University Bochum. In addition, he holds a Master’s degree in Lasers and Photonics and contributed as a lecturer to the Intellectual Property course at the Ruhr University Bochum.

Based on his practical experience in various technological areas, his expertise lies in the interdisciplinary use of Machine Learning together with other engineering sciences. In addition, he is an AI enthusiast who is passionate about the use of Machine Learning tools in the field of legal work.