This page is a draft: content may be incomplete, information may be missing, and parts may change rapidly. More information may be available on the talk page.
Lift Wing provides access to ML predictions from a growing number of models available as inference services. Its shared serving infrastructure exposes each model server through a standard REST API, so you can issue queries and receive results using common request formats. Despite this shared infrastructure, each model is unique: models differ in intended use, input requirements, output schema, and language or wiki-project coverage.
The Lift Wing documentation covers how to make standard API calls to model servers, but you must refer to each model's model card to understand its input and output schema, intended uses, and other important implementation details.
The Lift Wing API primarily supports request-driven use cases. For example, a client can get a model prediction for a wiki page edit by making a direct call to a Lift Wing model service. If your use case requires batch data access (periodic transfer of large chunks of data), Lift Wing may not be the best option for you; see meta:Research:Data instead.
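As a sketch of the request-driven pattern described above, the snippet below builds a prediction request for a Lift Wing model server. The endpoint pattern follows the public API Gateway form (`.../service/lw/inference/v1/models/{model}:predict`); the model name `enwiki-goodfaith` and the `rev_id` payload are illustrative examples of a revscoring-style call, so check the specific model's card for its actual input schema before using this.

```python
import json

# Hypothetical sketch: construct the URL and JSON body for a Lift Wing
# ":predict" call. The base URL below is the public API Gateway route;
# internal or authenticated access may use a different host.
LIFT_WING_BASE = "https://api.wikimedia.org/service/lw/inference/v1/models"

def build_prediction_request(model_name, payload):
    """Return the (url, json_body) pair for a Lift Wing prediction call."""
    url = f"{LIFT_WING_BASE}/{model_name}:predict"
    body = json.dumps(payload)
    return url, body

# Example: revscoring-style models typically take a revision ID as input.
url, body = build_prediction_request("enwiki-goodfaith", {"rev_id": 12345})
print(url)
print(body)
```

To actually issue the query, POST `body` to `url` with a `Content-Type: application/json` header using any HTTP client; the model server returns its prediction as a JSON document whose schema is defined by that model's card.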
Types of inference services available through Lift Wing
The models available on Lift Wing are constantly evolving. As of 2026, the major types of production models fall into several thematic groups, outlined below. Lift Wing "Revscoring" endpoints are meant to replace ORES; see overview of differences between Lift Wing and ORES.
Revision evaluation and vandalism detection
Analyze or describe articles
Suggest edits or content changes
Reference quality