Generative AI Webinar Series

Register now



Learn the latest in Generative AI

Join our new GenAI webinar series to learn about the latest trends and best practices in generative AI from industry leaders and practitioners.


Intel strives to provide you with a great, personalized experience, and your data helps us to accomplish this.


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request.

By submitting this form, you are confirming you are age 18 years or older. Intel may contact you for marketing-related communications. To learn about Intel's practices, including how to manage your preferences and settings, you can visit Intel's Privacy and Cookies notices.


  • May 8
    Emerging frontiers in GenAI: Agentic AI workflows and Multimodal LLMs

    An emerging paradigm in generative AI is the rise of agentic AI workflows, where different AI models act as agents that cooperate, plan, and execute to solve complex tasks. These agents can draw on foundation models such as large language models (LLMs) and multimodal LLMs, and can handle everything from project planning to tool usage and self-reflection. Multimodal LLMs are particularly useful when an enterprise has data in modalities other than language, such as videos, images, audio recordings, and slides with diagrams, tables, and charts.
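    As a rough illustration of the workflow described above, here is a minimal, hypothetical sketch of an agentic loop in Python: a planner decomposes a task into tool calls, stub agents execute them, and a reflection pass checks the results. All tool and function names here are invented for illustration; a real system would back the planner and each agent with LLM or multimodal-LLM calls.

```python
from typing import Callable

# Tool registry an agent can call; real systems would wrap APIs,
# retrieval backends, or code execution. These are stubs.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"top result for '{q}'",
    "summarize": lambda text: text[:40] + "...",
}

def plan(task: str) -> list[tuple[str, str]]:
    """Stub planner: map a task to (tool, input) steps. An LLM would do this."""
    return [("search", task), ("summarize", f"notes on {task}")]

def reflect(outputs: list[str]) -> bool:
    """Stub self-reflection: accept the run if every step produced output."""
    return all(outputs)

def run_agent(task: str) -> list[str]:
    """Execute the planned steps and gate the result on a reflection check."""
    outputs = [TOOLS[tool](arg) for tool, arg in plan(task)]
    if not reflect(outputs):
        raise RuntimeError("reflection failed; re-plan")
    return outputs

print(run_agent("GenAI market trends"))
```

    In practice the plan, execute, and reflect steps would each be separate model invocations, which is exactly the coordination problem this webinar covers.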



    Register now



    Speakers

    Vasudev Lal

    Principal AI Research Science Manager at Intel Labs

    Vasudev leads the Multimodal Cognitive AI team. His team develops AI systems that can synthesize concept-level understanding from multiple modalities (vision, language, video, etc.), leveraging large-scale AI clusters powered by Intel Gaudi. His current research interests include self-supervised training at scale for continuous, high-dimensional modalities like images, video, and audio; mechanisms to go beyond statistical learning in today’s AI systems by incorporating counterfactual reasoning and principles from causality; and exploring full 3D parallelism (tensor + pipeline + data) for training and inference of large AI models on Intel AI hardware (e.g., Intel Gaudi 2/3-based AI clusters in Intel Dev Cloud).

    Vasudev obtained his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor.

    LinkedIn Profile: https://www.linkedin.com/in/vasudev-lal-79bb336/

    Dillion Laird

    Founding Machine Learning Engineer at Landing AI

    Sancha Norris

    Ed Groden

    AI Sales Enabling Manager in Intel’s Sales and Marketing Group

  • On-Demand
    The Beauty of GenAI for Retail

    How can a retail business adopt generative AI and accelerate its growth? Join e.l.f. Beauty and Iterate.ai to learn how a low-code AI platform can quickly deploy large language models to improve operational efficiency and customer engagement. Not a data scientist? Learn how a low-code platform can augment your AI skills for faster innovation.

    What you will learn:

    • The good and the bad – the reality of generative AI for retail
    • How to choose the right generative AI initiatives for impactful outcomes
    • Best practices on how to build and deploy LLMs
    • RAG and LLMs in action for a social media application
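    To make the last bullet concrete, here is a toy retrieval-augmented generation (RAG) sketch: retrieve the most relevant snippet by word overlap, then splice it into the LLM prompt as context. The sample documents and the scoring are invented simplifications; a production deployment would use embeddings, a vector store, and an actual LLM call to answer from the assembled prompt.

```python
# Hypothetical knowledge snippets a retail brand might index.
DOCS = [
    "Holiday lip kit ships in November with three shades.",
    "Store hours are 9am to 9pm on weekends.",
    "Our social media campaign features user-generated tutorials.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query (toy scorer)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt an LLM would complete."""
    context = retrieve(query, DOCS)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("What is in the social media campaign?"))
```

    Grounding the answer in retrieved context is what lets the model respond with brand-specific facts it was never trained on.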


    Register now



    Ekta Chopra

    Chief Digital Officer, e.l.f. Beauty

    Bringing over 25 years of technology experience from private equity and retail companies, Ekta joined e.l.f. Beauty in 2016 as Chief Digital Officer. She spearheads the digital transformation at e.l.f. Beauty, covering eCommerce, engineering, data ecosystems, enterprise applications, security, Metaverse and Web3 technologies, and, notably, the development of AI-driven products. Before joining e.l.f., Ekta was Vice President of Technology at Charming Charlie, where she played a critical role in expanding the retailer from 17 to 120 stores across the nation.

    Brian Sathianathan

    Co-founder and CTO of Iterate.ai

    Brian launched his career at Apple, where he spearheaded key projects for iPhone and Intel Mac, securing patents for his work. He founded Avot Media, a video transcoding platform used by Warner Bros, which Smith Micro later acquired. Transitioning to investment, he contributed to 13 investments and acquisitions at Turner Media. Currently, as Co-Founder and CTO of Iterate.ai, Brian leads the development of the Enterprise AI platform Interplay, advancing digital innovation for prominent US retailers and brands.

    Sancha Norris

    AI and Data Science Marketing at Intel

    LinkedIn Profile: https://www.linkedin.com/in/sanchanorris/

  • On-Demand
    Building GenAI Platforms from Scratch

    Large Language Models (LLMs) and, more broadly, Generative AI (GenAI), have showcased remarkable versatility across a diverse array of industries and applications. Accenture will share its best practices, considerations, and architectures for constructing a self-managed GenAI platform capable of hosting a myriad of applications.

    Learn how:

    • To utilize the many open-source models to power and customize your generative AI applications.
    • To scale a generative AI POC into production.
    • Intel Xeon, Gaudi2, and software tools, such as Intel Extension for Transformers and Optimum Habana, can optimize your compute and minimize costs.

    Drawing from these fundamental principles, we will conclude the session with practical demonstrations showcasing GenAI workloads and discuss future extensions.



    Register now



    Richard Jiang

    Data Scientist

    Richard is a Data Scientist in Accenture's Applied Intelligence Division, based out of San Francisco. He has over 7 years of experience building statistical, machine learning, and AI-focused solutions for clients across a wide range of industries, including high tech, utilities, and health care. Over the past year, his focus has been on deep-diving into the inner workings of various open-source generative AI models, with the goal of deploying highly customizable generative AI solutions for clients.

    Sancha Norris

    AI and Data Science Marketing at Intel

    LinkedIn Profile: https://www.linkedin.com/in/sanchanorris/

  • On-Demand
    Prompt-Driven Efficiencies in LLMs

    It’s no secret that Large Language Models (LLMs) come with many challenges. Through prompt economization and in-context learning, we can address two significant challenges: model hallucinations and high compute costs.

    We will explore creative strategies for optimizing the quality and compute efficiency of LLM applications. These strategies not only make LLM applications more cost-effective, but they also lead to improved accuracy and user experiences. We will discuss the following techniques:

    • Prompt economization
    • Prompt engineering
    • In-context learning
    • Retrieval-augmented generation

    Join us to learn about these smart and easy ways to make your LLM applications more efficient.
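    As a taste of the first technique, here is a minimal, hypothetical sketch of prompt economization: trim retrieved context to a token budget before it reaches the LLM, cutting compute cost per request. The whitespace tokenizer and the budget value are simplifications for illustration; production code would count tokens with the model's own tokenizer and summarize rather than truncate.

```python
def economize(context: str, budget: int) -> str:
    """Keep only the first `budget` whitespace-delimited tokens of the context."""
    tokens = context.split()
    return " ".join(tokens[:budget])

def build_prompt(question: str, context: str, budget: int = 8) -> str:
    """Assemble a compact prompt: trimmed context, then the question."""
    return f"Context: {economize(context, budget)}\nQ: {question}\nA:"

# A long retrieved passage gets capped at the budget before prompting.
long_context = "word " * 100
print(build_prompt("What changed?", long_context))
```

    Since LLM serving cost scales with prompt length, even a crude budget like this directly reduces per-request compute; the webinar goes further with summarization and in-context example selection.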



    Register now



    Eduardo Alvarez

    Eduardo Alvarez is a Senior AI Solutions Engineer at Intel, specializing in architecting AI/ML solutions, MLOps, and deep learning. With a background in the energy tech startup space, he managed a team focused on delivering SaaS applications for subsurface AI in hydrocarbon and renewable energy exploration and production. Now at Intel, Eduardo collaborates across technical teams, designing impactful solutions that highlight the influence of the Intel software and hardware stack on deep learning and GenAI workloads. He is the author of Intel’s MLOps Professional Developer course, where he brings his expertise in production deployments of AI tools to a broad audience of student and enterprise developers.

    Sancha Huang Norris

    Generative AI Marketing Lead at Intel's Data Center and AI Business Unit

  • On-Demand
    Small and Nimble – the Fast Path to Enterprise GenAI

    The fast path to integrating the power of generative AI into your business is not necessarily a general-purpose, third-party giant model. Smaller LLMs, such as those under 20B parameters, can be as good a match or better for your needs. Recent commercially available compact models, such as Llama 2, can address the key attributes you need: performance, domain adaptation, private data integration, verifiability of results, security, flexibility, accuracy, and cost effectiveness. Join us as we evaluate the effectiveness of open-source LLMs, discuss pros and cons, and share methods for building nimble models.

    What you’ll learn about nimble models:

    • Advantages and challenges
    • The ecosystem-driven technology advancement of small, open models
    • Performance compared with top-tier giant models
    • Methods to build one and ways to assess its benefits and value 
    • The full path from a nimble model to a fully adapted model in business


    Register now



    Gadi Singer

    Vice President and Director of Emergent AI Research at Intel Labs, leading the development of third-wave AI capabilities.

    Moshe Berchansky

    NLP Deep Learning Researcher at EAI Intel Labs, specializing in Retrieval-Augmented Generation techniques.

    Sancha Huang Norris (moderator)

    Generative AI Marketing Lead at Intel's Data Center and AI Business Unit