An emerging paradigm in Generative AI is the rise of Agentic AI workflows, in which different AI models act as agents that cooperate, plan, and execute to solve complex tasks. These agents can build on foundation AI models such as Large Language Models (LLMs) and Multimodal LLMs, and can handle everything from project planning to tool use and self-reflection. Multimodal LLMs are particularly useful when an enterprise has data in modalities other than language, such as videos, images, audio recordings, and slides with diagrams, tables, and charts.
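As a rough illustration of the loop such agents run (plan, act via tools, self-reflect), here is a minimal hedged sketch; the `llm` stub, the tool table, and all function names are illustrative stand-ins, not any particular framework's API:

```python
# Minimal sketch of an agentic loop: plan -> act (tool use) -> reflect.
# The "llm" below is a stub standing in for any foundation-model call.

def llm(prompt: str) -> str:
    # Stand-in for a real LLM/MLLM call; returns canned responses.
    if "plan" in prompt:
        return "1. look_up(population of France)\n2. summarize"
    return "stub answer"

# Tool registry: the agent can invoke these by name during execution.
TOOLS = {
    "look_up": lambda q: "67 million" if "France" in q else "unknown",
}

def run_agent(task: str, max_steps: int = 3) -> list[str]:
    transcript = []
    plan = llm(f"Make a plan for: {task}")          # planning phase
    transcript.append(f"plan: {plan}")
    for step in plan.splitlines()[:max_steps]:      # execution phase
        if "look_up" in step:
            query = step.split("(", 1)[1].rstrip(")")
            observation = TOOLS["look_up"](query)   # tool use
            transcript.append(f"tool look_up -> {observation}")
    # Self-reflection pass: ask the model to critique its own answer.
    transcript.append("reflection: " + llm("reflect on the answer"))
    return transcript
```

In a real workflow the stubbed `llm` would be replaced by calls to one or more foundation models, and the transcript would feed back into the next planning round.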
Principal AI Research Science Manager at Intel Labs
Vasudev leads the Multimodal Cognitive AI team. His team develops AI systems that can synthesize concept-level understanding from multiple modalities (vision, language, video, etc.), leveraging large-scale AI clusters powered by Intel Gaudi. His current research interests include self-supervised training at scale for continuous and high-dimensional modalities like images, video, and audio; mechanisms to go beyond statistical learning in today’s AI systems by incorporating counterfactual reasoning and principles from causality; and exploring full 3D parallelism (tensor + pipeline + data) for training and inference with large AI models on Intel AI hardware (e.g., Intel Gaudi 2/3-based AI clusters in Intel Dev Cloud).
Vasudev obtained his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor.
LinkedIn Profile: https://www.linkedin.com/in/vasudev-lal-79bb336/
Founding Machine Learning Engineer at Landing AI
AI Sales Enabling Manager in Intel’s Sales and Marketing Group
How can a retail business adopt generative AI and accelerate its growth? Join e.l.f. Beauty and Iterate.ai to learn how a low-code AI platform can quickly deploy large language models to improve operational efficiency and customer engagement. Not a data scientist? Learn how a low-code platform can augment your AI skills for faster innovation.
What you will learn:
Chief Digital Officer, e.l.f. Beauty
Bringing more than 25 years of technology experience from private equity and retail companies, Ekta joined e.l.f. Beauty in 2016 as the Chief Digital Officer. She spearheads the digital transformation at e.l.f. Beauty, covering eCommerce, engineering, data ecosystems, enterprise applications, security, Metaverse and Web3 technologies, and, notably, the development of AI-driven products. Before joining e.l.f., Ekta was the Vice President of Technology at Charming Charlie, where she played a critical role in expanding the retailer from 17 to 120 stores across the nation.
Co-founder and CTO of Iterate.ai
Brian launched his career at Apple, where he spearheaded key projects for iPhone and Intel Mac, securing patents for his work. He founded Avot Media, a video transcoding platform used by Warner Bros and later acquired by Smith Micro. Transitioning to investment, he contributed to 13 investments and acquisitions at Turner Media. Currently, as Co-Founder and CTO of Iterate.ai, Brian leads the development of the Enterprise AI platform Interplay, advancing digital innovation for prominent US retailers and brands.
AI and Data Science Marketing at Intel
LinkedIn Profile: https://www.linkedin.com/in/sanchanorris/
Large Language Models (LLMs) and, more broadly, Generative AI (GenAI), have showcased remarkable versatility across a diverse array of industries and applications. Accenture will share its best practices, considerations, and architectures for constructing a self-managed GenAI platform capable of hosting a myriad of applications.
Learn how:
Drawing from these fundamental principles, we will conclude the session with practical demonstrations showcasing GenAI workloads and discuss future extensions.
Data Scientist
A Data Scientist in Accenture's Applied Intelligence Division based in San Francisco, he has over 7 years of experience building statistical, Machine Learning, and AI-focused solutions for clients across a wide range of industries, including high tech, utilities, and health care. Over the past year, his focus has been on deep-diving into the inner workings of various open-source Generative AI models with the goal of deploying highly customizable Generative AI solutions for clients.
AI and Data Science Marketing at Intel
LinkedIn Profile: https://www.linkedin.com/in/sanchanorris/
It’s no secret that Large Language Models (LLMs) come with many challenges. Through prompt economization and in-context learning, we can address two significant challenges: model hallucinations and high compute costs.
We will explore creative strategies for optimizing the quality and compute efficiency of LLM applications. These strategies not only make LLM applications more cost-effective, but they also lead to improved accuracy and user experiences. We will discuss the following techniques:
Join us to learn about these smart and easy ways to make your LLM applications more efficient.
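To make the idea of prompt economization concrete, here is a hedged sketch of one such strategy: keeping only the few-shot examples most relevant to the query within a rough token budget, rather than packing every example into the prompt. The word-overlap scoring and whitespace token count are deliberately simple stand-ins for an embedding-based retriever and a real tokenizer:

```python
# Hedged sketch of prompt economization: select the most relevant
# in-context examples under a token budget instead of sending them all.

def rough_tokens(text: str) -> int:
    # Crude proxy for a tokenizer's count: whitespace-split word count.
    return len(text.split())

def economize_prompt(query: str, examples: list[str], budget: int = 40) -> str:
    def overlap(ex: str) -> int:
        # Word-overlap relevance score; a stand-in for embedding similarity.
        return len(set(ex.lower().split()) & set(query.lower().split()))

    chosen, used = [], rough_tokens(query)
    # Greedily keep the highest-relevance examples that fit the budget.
    for ex in sorted(examples, key=overlap, reverse=True):
        cost = rough_tokens(ex)
        if used + cost <= budget:
            chosen.append(ex)
            used += cost
    return "\n".join(chosen + [query])
```

Fewer, better-chosen examples cut the tokens billed per call while keeping the in-context signal that grounds the model and reduces hallucination.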
Eduardo Alvarez is a Senior AI Solutions Engineer at Intel, specializing in architecting AI/ML solutions, MLOps, and deep learning. With a background in the energy tech startup space, he managed a team focused on delivering SaaS applications for subsurface AI in hydrocarbon and renewable energy exploration and production. Now at Intel, Eduardo collaborates across technical teams, designing impactful solutions highlighting the Intel software and hardware stack's influence on Deep Learning and GenAI workloads. He is the author of Intel’s MLOps Professional Developer course, where he brings his expertise in the production deployments of AI tools to a broad audience of student and enterprise developers.
Generative AI Marketing Lead at Intel's Data Center and AI Business Unit
The fast path to integrating the power of generative AI into your business is not necessarily a general-purpose, third-party giant model. Smaller LLMs, such as those with fewer than 20B parameters, can be as good a match for your needs, or better. Recent commercially available compact models, such as Llama 2, can address the key attributes you need: performance, domain adaptation, private data integration, verifiability of results, security, flexibility, accuracy, and cost effectiveness. Join us as we evaluate the effectiveness of open-source LLMs, discuss pros and cons, and share methods to build nimble models.
What you’ll learn about nimble models:
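One of the attributes the abstract highlights, private data integration, is often delivered through retrieval-augmented generation (RAG). Here is a hedged sketch of the pattern with a compact model in mind; the document store, word-overlap scorer, and stubbed generation step are all illustrative stand-ins (a real system would use an embedding model for retrieval and a model such as Llama 2 for generation):

```python
# Hedged RAG sketch: retrieve the most relevant private document, then
# prepend it as context so a compact LLM can answer from verifiable data.

DOCS = [
    "Our return policy allows refunds within 30 days.",
    "Store hours are 9am to 9pm on weekdays.",
]

def score(query: str, doc: str) -> int:
    # Word-overlap stand-in for embedding similarity.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str) -> str:
    # Pick the single best-matching private document.
    return max(DOCS, key=lambda d: score(query, d))

def answer(query: str) -> str:
    context = retrieve(query)
    prompt = f"Context: {context}\nQuestion: {query}"
    # A real system would call the compact LLM here; returning the
    # prompt instead keeps the grounding step visible in this sketch.
    return prompt
```

Because the answer is grounded in a retrieved document, results can be traced back to the source, which speaks to the verifiability attribute listed above.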
Vice President and Director of Emergent AI Research at Intel Labs, leading the development of third-wave AI capabilities.
NLP Deep Learning Researcher in the Emergent AI (EAI) research group at Intel Labs, specializing in Retrieval-Augmented Generation techniques.
Generative AI Marketing Lead at Intel's Data Center and AI Business Unit