Data and AI need to be unified: the best AI applications require massive amounts of constantly updated training data to build state-of-the-art models. Apache Spark has been the only unified analytics engine that combines large-scale data processing with the execution of machine learning and AI algorithms at scale.
The sessions and training at this conference will cover data engineering and data science content along with best practices for productionizing AI: keeping training data fresh with stream processing, monitoring quality, testing, and serving models at massive scale. We will also have deep-dive sessions on popular software frameworks, e.g., TensorFlow, scikit-learn, Keras, PyTorch, Deeplearning4j, BigDL, and Deep Learning Pipelines.
With Spark and AI topics together, this conference is a unique "one-stop shop" for developers, data scientists, and tech executives to learn how to practically apply the best tools in data and AI to build innovative products. So join more than 4,000 engineers, data scientists, AI experts, researchers, and business professionals for three days of in-depth learning and networking.