Data and AI need to be unified: the best AI applications require massive amounts of constantly updated training data to build state-of-the-art models. So far, Apache Spark is the only unified analytics engine that combines large-scale data processing with state-of-the-art machine learning and AI algorithms.
The sessions and training at this conference will cover data engineering and data science content, along with best practices for productionizing AI: keeping training data fresh with stream processing, quality monitoring, testing, and serving models at massive scale. The conference will also include deep-dive sessions on popular software frameworks, e.g., TensorFlow, scikit-learn, Keras, PyTorch, Deeplearning4j, BigDL, and Deep Learning Pipelines.
By combining Spark and AI topics, this conference is a unique "one-stop shop" for developers, data scientists, and tech executives seeking to apply the best tools in data and AI to build innovative products. Join more than 5,000 engineers, data scientists, AI experts, researchers, and business professionals for three days of in-depth learning and networking.