AI Glossary

Cloud AI

AI services and infrastructure provided through cloud platforms, enabling organizations to train, deploy, and scale AI models without owning specialized hardware.

Major Platforms

AWS: SageMaker, Bedrock (managed foundation-model API). Google Cloud: Vertex AI, TPU access. Azure: Azure Machine Learning, Azure OpenAI Service. Specialized GPU clouds: Lambda Labs, CoreWeave, RunPod.

Services

Pre-trained model APIs for vision, language, and speech. Managed training infrastructure. AutoML tools for non-experts. MLOps platforms for production deployment. Vector databases as a managed service.
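The core operation behind a managed vector database is similarity search over embeddings. A minimal in-memory sketch, using cosine similarity and made-up three-dimensional vectors (real services index millions of high-dimensional embeddings with approximate-nearest-neighbor structures):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    """Return the id of the stored vector most similar to the query."""
    return max(index, key=lambda key: cosine_similarity(query, index[key]))

# Toy "index" of embeddings keyed by document id (hypothetical data)
index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.0, 1.0, 0.0],
    "doc-c": [0.7, 0.7, 0.0],
}
print(nearest([0.9, 0.1, 0.0], index))  # doc-a
```

Hosted services wrap this retrieval step behind an API and add persistence, filtering, and scaling, so applications only send embeddings and queries.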

Considerations

Cost management (GPU instances are expensive, and idle instances still bill). Data privacy and residency requirements. Vendor lock-in from proprietary APIs and formats. Network latency for real-time applications. On-premises vs. cloud trade-offs for sensitive workloads.
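The cost point above comes down to simple arithmetic: on-demand GPU spend is rate × GPU count × billed hours, and low utilization inflates billed hours. A back-of-the-envelope estimator with a hypothetical $2.50/GPU-hour rate (actual prices vary widely by provider and instance type):

```python
def training_cost(gpu_hourly_rate, num_gpus, hours, utilization=1.0):
    """Estimate on-demand training cost in dollars.

    gpu_hourly_rate: per-GPU hourly price (hypothetical; varies by provider).
    hours: useful compute hours needed.
    utilization: fraction of billed time doing useful work;
                 lower utilization means more billed hours.
    """
    billed_hours = hours / max(utilization, 1e-9)
    return gpu_hourly_rate * num_gpus * billed_hours

# Hypothetical run: 8 GPUs at $2.50/GPU-hour for 72 useful hours
print(round(training_cost(2.50, 8, 72), 2))       # 1440.0
# The same run at 50% utilization doubles the bill
print(round(training_cost(2.50, 8, 72, 0.5), 2))  # 2880.0
```

Estimates like this are one input to the on-premises vs. cloud decision: sustained high-utilization workloads can favor owned hardware, while bursty training favors rental.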


Last updated: March 5, 2026