Why Attend
Enterprises can lose track of ownership costs and fail to use their total available compute because of the ‘Memory Wall’ bottleneck and other system-design inefficiencies. MemCon 2024 breaks down these ‘Memory Wall’ and system-design issues by bringing together internal teams at organizations working with data-intensive workloads, AI vendors, and the technology vendors creating next-generation systems and components. MemCon 2024 is a one-stop shop for emerging technologies in the memory and storage domain, and a hub for efficient data movement and management.
For technology vendors and system designers, MemCon 2024 is a user-centric deep-dive and a recognized launchpad for emerging technologies such as CXL and emerging memories. MemCon 2024 gives technology vendors direct access to the data practitioners grappling with designing efficient AI/ML systems.
For technology end-users, such as enterprise teams working on large language models (LLMs), or AI vendors that are designing state-of-the-art models, MemCon 2024 provides a platform to meet peers trying to overcome system data movement and management challenges. The conference provides real-world feedback for technology vendors and allows AI vendors and technology end-users to be at the forefront of system design innovation.
What’s in it for you?
- Discover, discuss, and debate data management roadblocks such as data complexities and the limitations of traditional system design, enabling you to make critical business and technical buying decisions about your future systems infrastructure.
- Make sense of ownership costs by understanding the intricacies of data management economics, and reduce potential trade-offs by recognizing how workloads impact system design, informing your understanding of AI/ML’s effect on infrastructure costs.
- Future-proof your data management operations through industry-leading insights from hardware and software architects. Be the first to see new products entering the market that will level up AI/ML deployments and unlock new systems innovation. Learn from the leading startups and connect with future leaders within the space.
- Join a thriving network of systems and AI/ML experts. Whether you are accessing new pools of customers, connecting with potential technology partners, or tackling data issues with peers, the MemCon community is here to help you build brand awareness and thought leadership within the memory and systems markets, and to connect you with enterprise-level business leaders.
Featured 2024 Speakers Included
Zaid Kahn
Zaid is currently GM of Cloud Hardware Infrastructure Engineering at Microsoft, where he leads a team focused on advanced architecture and engineering efforts for AI. He is passionate about building balanced teams of artists and soldiers that solve incredibly difficult problems at scale.
Prior to Microsoft, Zaid was head of infrastructure engineering at LinkedIn, responsible for all aspects of engineering for datacenters, compute, networking, storage, and hardware. He also led several software development teams, spanning BMC, network operating systems, and server and network fleet automation, as well as SDN efforts inside the datacenter and across the global backbone, including the edge. He introduced the concept of disaggregation inside LinkedIn and pioneered JDM with multiple vendors through key initiatives like OpenSwitch and Open19, essentially giving LinkedIn control of its own hardware development destiny. During his 9-year tenure at LinkedIn, his team scaled the network and systems 150X as membership grew from 50M to 675M, with someone hired every 7 seconds on the LinkedIn platform.
Prior to LinkedIn, Zaid was a network architect at WebEx, responsible for building the MediaTone network, and later he founded a startup that built a pattern-recognition security chip using NPU/FPGA technology. Zaid holds several patents in networking and SDN and is a recognized industry leader. He previously served as a board member of the Open19 Foundation and the San Francisco chapter of the Internet Society. Currently he serves on the DE-CIX and Pensando advisory boards.
Manoj Wadekar
Helen Byrne
Helen leads the Solution Architects team at Graphcore, helping innovators build their AI solutions using Graphcore’s Intelligence Processing Units (IPUs). She has been at Graphcore for more than 5 years, previously leading AI Field Engineering and, before that, working in AI Research on problems in distributed machine learning. Before landing in the technology industry, she worked in investment banking. Her background is in mathematics, and she holds an MSc in Artificial Intelligence.
Tejas Chopra
Tejas Chopra is a Sr. Engineer at Netflix, working on the Machine Learning Platform for Netflix Studios, and a founder at GoEB1, the world’s first and only thought-leadership platform for immigrants. Tejas is a recipient of the prestigious EB1A (Einstein) visa in the US. He is a Tech 40 under 40 Award winner, a TEDx speaker, a Senior IEEE Member, and an ACM member, and has spoken at conferences and panels on cloud computing, blockchain, software development, and engineering leadership. Tejas has been awarded the ‘International Achievers Award, 2023’ by the Indian Achievers’ Forum. He is an Adjunct Professor for Software Development at the University of Advancing Technology, Arizona, an angel investor, and a startup advisor to startups like Nillion. He is also a member of the Advisory Board for Flash Memory Summit. Tejas’ experience spans companies such as Box, Apple, Samsung, Cadence, and Datrium. He holds a master’s degree in ECE from Carnegie Mellon University, Pittsburgh.
Puja Das
Dr. Puja Das leads the Personalization team at Warner Brothers Discovery (WBD), which covers offerings on Max, HBO, Discovery+, and many more.
Prior to WBD, she led a team of applied ML researchers at Apple focused on building large-scale recommendation systems to serve personalized content on the App Store, Arcade, and Apple Books. Her areas of expertise include user modeling, content modeling, recommendation systems, multi-task learning, sequential learning, and online convex optimization. She also led the Ads Prediction team at Twitter (now X), where she focused on relevance modeling to improve App Ads personalization and monetization across all of Twitter’s surfaces.
She obtained her Ph.D. in Machine Learning from the University of Minnesota, where her dissertation focused on online learning algorithms that work on streaming data. Her dissertation was the recipient of the prestigious IBM Ph.D. Fellowship Award.
She is active in the research community and serves on program committees at ML and recommendation-system conferences. She has mentored several undergraduate and graduate students and participated in various round-table discussions through the Grace Hopper Conference, the Women in Machine Learning program co-located with NeurIPS, AAAI, and the Computing Research Association’s Women’s chapter.
Jin-Hyeok Choi
Jin-Hyeok Choi leads the Memory division of Samsung’s Device Solutions R&D, which develops new memory technologies and enables memory products.
Jin-Hyeok joined Samsung Electronics in 2003 as an SoC design engineer, working on the development of mobile storage. From 2012 to 2019, he led the development team for controllers, a core component of SoCs based on NAND flash. He developed and commercialized the world's first eMMC and UFS products, as well as various controllers for SATA/SAS/NVMe SSDs. He also developed the first-ever enterprise premium SSD with high-endurance V-NAND and has contributed significantly to the expansion of the storage market.
Jin-Hyeok received his B.S., M.S., and Ph.D. degrees in Electronics Engineering from Seoul National University in 1989, 1991, and 1996, respectively. He also studied low-power circuits at the University of Tokyo's Institute of Industrial Science.