November 2025
Standardized model versioning. See the model catalog for the latest models to use in your pipelines and search configurations.
- Released mte-base-sparse-v1 for sparse embeddings that preserve keyword and code fidelity in hybrid search scenarios. Use mte-base-sparse-v1 in VECTORIZER processors in ingestion pipelines.
- Renamed mte-base-knowledge to mte-base-knowledge-v1 and mte-base-knowledge-rank to mte-base-knowledge-rank-v1. Use these IDs in pipelines (VECTORIZER) and search ranking configs going forward.
- Released mte-base-v1 for general-purpose medical text embeddings.
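To make the naming change concrete, here is a minimal sketch of how the versioned model IDs could be wired into an ingestion pipeline and a ranking configuration. The schema (keys like "processors", "type", and "model") is an illustrative assumption, not the product's actual configuration format; only the model IDs come from this changelog.

```python
# Hypothetical ingestion pipeline configuration. The key names below are
# illustrative assumptions; only the model IDs are taken from the changelog.
pipeline = {
    "name": "medical-hybrid-ingestion",
    "processors": [
        # Dense embeddings for general-purpose medical text
        {"type": "VECTORIZER", "model": "mte-base-v1"},
        # Sparse embeddings preserving keyword/code fidelity for hybrid search
        {"type": "VECTORIZER", "model": "mte-base-sparse-v1"},
    ],
}

# Hypothetical search ranking configuration using the renamed ranker ID
ranking_config = {"model": "mte-base-knowledge-rank-v1"}

# Collect every model ID referenced by this configuration
models_used = [p["model"] for p in pipeline["processors"]] + [ranking_config["model"]]
print(models_used)
```

The point of the sketch is simply that, going forward, every model reference in a pipeline or ranking config should use a versioned ID (the `-v1` suffix), so upgrades to later model versions become an explicit config change rather than a silent behavior shift.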
July 2025
- Renamed embedder_medical_journals_qa to mte-base-knowledge. Renamed ranker_medical_journals_qa to mte-base-knowledge-rank.
- mte-base-knowledge shows improved performance on CURE (~7.5 points on average over the previous release).
November 2024
Initial release.
- Ability to use the embedder_medical_journals_qa model ID in ingestion pipelines, specialized in generating embeddings for Knowledge Search. Scores ~2 points higher than gte-multilingual-base on the CURE open benchmark.
- Ability to use the ranker_medical_journals_qa model ID for semantic ranking. See the Ranking API for more details.
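As a sketch of how the original ranker model ID might be used for semantic ranking, the snippet below assembles a ranking request. The payload shape (fields "model", "query", "passages") and the helper function are hypothetical illustrations; consult the Ranking API documentation for the actual request format.

```python
# Hypothetical semantic ranking request built around the original model ID.
# The payload structure is an illustrative assumption, not the Ranking API's
# actual schema; only the model ID comes from the changelog.
def build_ranking_request(query: str, passages: list[str]) -> dict:
    """Assemble a ranking request payload (hypothetical schema)."""
    return {
        "model": "ranker_medical_journals_qa",
        "query": query,
        "passages": passages,
    }

request = build_ranking_request(
    "treatment options for atrial fibrillation",
    ["Passage A ...", "Passage B ..."],
)
print(request["model"])
```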