S3AI - SLAC Streaming Sandbox for AI
The SLAC EdgeAI team has partnered with private companies and research institutions to demonstrate and benchmark novel hardware for ultrahigh-throughput AI inference and conventional streaming data-processing algorithms. The sandbox enables pair programming between the domain experts who bring the algorithms and the hardware engineers and architects who can creatively leverage their unique hardware to execute pieces of those algorithms with low latency and high throughput.
In training, we have previously demonstrated model training on SambaNova RDUs that fits within the sub-10-minute windows both between runs at LCLS-II and between shots at DIII-D.
In inference, we achieved 10 microseconds per inference (a rate of 100k inferences per second) using a Graphcore POD 16 in a round-robin configuration. Again, these are inference timescales relevant to real-time diagnostic information extraction at both LCLS-II and DIII-D.
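The round-robin idea can be sketched as follows. This is an illustrative toy, not the S3AI implementation: the per-device latency of 160 microseconds and the dispatch function are assumptions chosen so that interleaving requests across the 16 workers of a POD 16 yields the 10-microsecond aggregate period quoted above.

```python
from itertools import cycle

NUM_WORKERS = 16          # e.g. the 16 IPUs of a Graphcore POD 16
PER_INFERENCE_US = 160.0  # hypothetical single-device latency, microseconds

def round_robin_dispatch(requests, num_workers):
    """Assign each request to the next worker in cyclic order."""
    workers = cycle(range(num_workers))
    return [(req, next(workers)) for req in requests]

# Interleave a stream of requests across the workers.
assignments = round_robin_dispatch(range(64), NUM_WORKERS)

# With requests pipelined across devices, the effective period per
# completed inference is the per-device latency divided by the
# number of workers kept busy.
effective_us = PER_INFERENCE_US / NUM_WORKERS  # 10 us per inference
rate_hz = 1e6 / effective_us                   # 100,000 inferences/s
print(f"{effective_us:.0f} us per inference -> {rate_hz:,.0f} inferences/s")
```

The design point is that no single device needs to hit the target latency; the aggregate rate scales with the number of devices the dispatcher can keep saturated.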
S3AI Facilities
S3AI comprises a dedicated rack of servers operated as an extension of the S3DF facility.
Ryan Herbst / SLAC National Accelerator Laboratory
S3AI Data Center
The S3AI data center allows for flexible cabling and the addition of processing engines into easily accessible servers.
Ryan Herbst / SLAC National Accelerator Laboratory
Instrumentation Labs
The S3AI servers are co-located with instrumentation labs supporting optical tables for ease of integration with detectors.
Ryan Herbst / SLAC National Accelerator Laboratory
S3AI Partners
A number of partners are joining S3AI from across the pixel-to-HPC continuum:
Cornami
Abaco
Wind River
Groq
Fusion Energy Data Ecosystem and Repository (FEDER)