Mastering CRM data enrichment: AI & speech-to-text for smarter leads

Published on Apr 30, 2026
by Ani Ghazaryan

The article explains how AI and speech-to-text can enrich CRM records by turning sales calls into structured lead data like names, budgets, timelines, sentiment, and intent signals. It covers pipeline architecture, accuracy testing, compliance, cost planning, CRM integration, and production monitoring.

TL;DR:

  • Standard firmographic enrichment misses the highest-intent signals: what prospects actually say during calls. Those signals are almost entirely absent from most CRM records.
  • Transcription errors silently corrupt names, phone numbers, and deal-stage signals in CRM records; every downstream scoring and routing system inherits the error.
  • A production-ready audio enrichment pipeline requires async-first STT with diarization (async-only), NER, sentiment analysis, and robustness on accented and multilingual audio.
  • Solaria-1 delivers on average 29% lower WER than alternatives on conversational speech, benchmarked across 8 providers, 7 datasets, and 74+ hours of audio.

Most CRM data enrichment pipelines are architected without accounting for audio-specific failure modes. Silent WER degradation compounds as your sales team expands into new geographies. NER misclassifications map entities to the wrong CRM fields. Diarization misattribution corrupts per-speaker sentiment scores. Compliance gaps in DPAs expose audio data to model retraining. A production-ready audio enrichment pipeline requires async-first STT with NER, speaker diarization (async workflows only), and robustness on real-world audio spanning accented speech and multilingual conversations.

The architecture decisions that matter most are webhook-based integration patterns, NER-to-CRM schema mapping with field-level overwrite logic, and build-versus-buy cost models that include the inaccuracy cost component most teams miss when modeling infrastructure spend. By the end, you'll be equipped to evaluate STT vendors against your own production audio conditions, architect a webhook-based enrichment pipeline that survives real-world failure modes, and model infrastructure costs at realistic scale including the manual review burden generated by poor transcription accuracy.

The guide covers compliance requirements for audio data including GDPR, SOC 2, data residency constraints, and PII redaction configuration, plus the technical requirements for handling multilingual and accented speech robustly in real-world call recordings.

CRM data enrichment: architecture & impact

A lead enrichment pipeline is the automated process of capturing raw lead data, matching it against external data sources, and appending additional fields to each CRM record. The pipeline covers four core stages:

  • Data collection (form submissions, call recordings, list uploads)
  • Cleansing and deduplication
  • Validation against external databases
  • CRM integration with field mapping

Headless lead enrichment decouples the enrichment logic from your CRM UI. It runs as an async pipeline triggered on call completion, processes audio through an STT layer, routes the transcript to an NLP or LLM model, and pushes structured output to your CRM via webhook. Audio-derived data from call recordings supplements firmographic data from static databases to fill gaps where web-scraping misses budget, timeline, and decision-authority signals.

Why poor lead data costs your team

The cost of bad CRM data compounds across three budget lines simultaneously. Poor data quality costs organizations $15 million annually on average, according to Gartner research, compounding across every system that reads from your CRM. A wrong name in a transcript becomes a wrong name in a contact record, which becomes a wrong name in every automated email sequence your system sends. The enrichment layer doesn't just add fields. It sets the accuracy ceiling for everything downstream.

For a 10-person sales team at a $75/hr fully-loaded cost, one hour of admin per day per rep adds up to roughly $187,500 in annual labor that doesn't advance a single deal. That same manual entry process introduces systematic errors: transposed phone numbers, misspelled company names, and missing job titles that break downstream routing logic.

Speed-to-lead is where enriched data pays off most directly. Research from Oldroyd et al. published in HBR (2011) found substantially higher qualification rates for leads contacted within five minutes versus later response windows, and routing logic that depends on stale or missing CRM fields breaks that response window entirely.

AI in the lead enrichment pipeline

AI augments the enrichment pipeline at three points: predictive lead scoring, NLP-based entity extraction from unstructured audio, and anomaly detection for catching data degradation before it compounds downstream.

Predictive scoring uses historical deal outcomes to weight incoming signals. NLP-based NER pulls structured fields like company name, role, and contract value from raw transcripts and maps them to CRM schema. Anomaly detection flags records where extracted data breaks expected patterns: a phone number populating a name field, or an entity confidence score dropping 20% below baseline. Our audio-to-LLM pipeline handles the handoff from structured transcript output to downstream models, so the enrichment logic runs without manual intervention between layers.
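As a concrete illustration, an anomaly check of this kind can run as a lightweight validation step before the CRM write. The field names, the phone-number pattern, and the 20%-below-baseline confidence threshold below are illustrative assumptions, not part of any API response schema:

```python
import re

# Hypothetical validation rules; tune the pattern and threshold to your
# own CRM schema and observed confidence baseline.
PHONE_PATTERN = re.compile(r"^[\d\s().+-]{7,}$")

def flag_anomalies(record: dict, baseline_confidence: float = 0.85) -> list[str]:
    """Return a list of anomaly flags for one enriched CRM record."""
    flags = []
    # A phone-number-shaped string in the name field suggests a mis-mapped entity.
    name = record.get("contact_name", "")
    if name and PHONE_PATTERN.match(name):
        flags.append("phone_number_in_name_field")
    # Entity confidence dropping 20% below baseline signals model drift
    # or degraded audio conditions.
    conf = record.get("entity_confidence")
    if conf is not None and conf < baseline_confidence * 0.8:
        flags.append("entity_confidence_below_baseline")
    return flags
```

Records that come back with a non-empty flag list can be routed to a manual review queue instead of being written to the CRM.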

Building a production-ready data enrichment pipeline

API-first vs. self-host: time to value

The build-versus-buy decision for STT infrastructure comes down to three concrete comparisons: time to first working integration, ongoing maintenance cost, and accuracy on your actual production audio.

Managed APIs connect via REST or WebSocket in hours. Teams can integrate and test using our getting started documentation. Self-hosting requires GPU provisioning, model version management, inference pipeline configuration, and ongoing maintenance that doesn't scale down with usage. At 10,000 hours/month, self-hosted infrastructure (compute, storage, networking, and engineering overhead) runs $25,000–$50,000/month, based on the cost model in our build an AI note-taker guide, versus roughly $5,000 for a managed API at equivalent volume.

Factor | Managed API | Self-hosted model
Time to first integration | Hours to days | Weeks to months
Monthly infra cost | Per-hour API rate | $25,000–$50,000/month at 10,000 hrs (compute + engineering overhead)
WER on real-world audio | Optimized for production | Varies significantly in production
Scaling | API handles capacity | Infrastructure provisioning required

Latency & throughput: real-time vs. batch

For CRM enrichment workflows, async (batch) transcription is the correct default. Async processes the full recording before producing output, which gives the model full context for diarization, language detection, and entity extraction. Async workflows deliver enriched CRM records within minutes of call completion. For post-call CRM field population, async accuracy is worth the negligible wait, and the improvement in diarization and multilingual handling is significant. The Gladia CCaaS use case page walks through async transcription in contact center workflows and the outcomes it enables.

GDPR, SOC 2, & data residency

Compliance requirements for audio data are stricter than for text enrichment because call recordings frequently contain PII, financial data, and health information. Before selecting a vendor, verify these requirements:

  • Data residency: does audio processing stay within the required geographic region?
  • Retraining clause: does the DPA confirm the vendor doesn't use your audio to retrain models?
  • Certifications: SOC 2 Type II, ISO 27001, HIPAA, and GDPR should all be in scope.
  • PII redaction: is it configurable and your responsibility to enable?

On Growth and Enterprise plans, customer audio is never used to retrain our models, no opt-out required, no contract clause to negotiate. On the Starter plan, data can be used for model training by default. The compliance hub covers SOC 2 Type II, ISO 27001, HIPAA, GDPR, and PCI.

Preventing silent data pipeline failures

Silent pipeline failures are the hardest class of bug in a CRM enrichment system. A wrong phone number doesn't throw an exception. It sits in the record and routes to the wrong prospect. The three main causes are transcription errors that survive validation, NER misclassifications that map entities to the wrong CRM fields, and API downtime that drops records rather than queuing them for retry.

Uptime history and incident transparency matter here. Aircall uses Gladia's infrastructure to process transcription for a high volume of calls, which provides a production scale reference point you can verify beyond vendor-reported uptime figures. Check our status page for historical incident data before committing to a production integration.

STT-to-CRM integration differs from packaged voice-to-CRM tools in one critical way: you control the full pipeline, which means you control the schema mapping. A packaged tool makes its own decisions about what to extract and where to map it. An STT API returns structured JSON that you map yourself, giving you exact control over which entity types populate which CRM fields. The Attention platform uses Gladia's API as its core transcription layer to power CRM population, coaching scorecards, and conversation intelligence in production. A technical walkthrough is available in the Attention x Gladia webinar.

Intent signals in sales calls are more specific than anything a form can capture. A prospect who says "we need this in Q3 before our board review" communicates their timeline, their decision trigger, and their internal approval structure in one sentence. NLU-based intent detection matches utterance patterns against a taxonomy of intent classes, scores each against confidence thresholds you define, and feeds the structured output directly into your lead scoring model as a weighted input, running automatically after the call ends with no rep involvement required.

Identifying speakers in sales conversations

Speaker diarization converts a wall of undifferentiated transcript text into an attributed record where each sentence carries a speaker label. When you know which words belong to the prospect versus the rep, you can apply NER selectively to prospect utterances, measure talk-time ratios, and score objection-handling patterns across your team.

Our diarization layer is available in async workflows and is included in the JSON output with word-level timestamps you pass directly to your LLM. The Gladia x pyannoteAI webinar covers what DER metrics to expect on real sales call audio with overlapping speech.

Robust STT for noisy audio & accents

Sales calls are not recorded in anechoic chambers. Background noise, overlapping speech, poor microphone quality, and non-native accents are constants, not edge cases. Most STT vendors test on studio-quality data and evaluate only WER across clean English corpora. Production degradation surfaces the moment a French-accented prospect joins from a coffee shop, and teams discover the gap through support tickets, not monitoring dashboards. Solaria-1 is designed to handle multilingual conversations and accented speech in production environments.

Reliable transcripts for actionable leads

A 10% WER on a 60-word utterance means six words are wrong. If one of those six is a company name, a phone number, or a budget figure, you've corrupted the CRM record before a rep ever sees it. Production transcription accuracy is critical for maintaining CRM data integrity at scale.

Extracting lead insights from audio AI

Once the transcript is clean, the audio intelligence layer converts it into structured fields your CRM can consume. The API returns diarized text, word-level timestamps, detected entities, sentiment scores, and summaries in a single JSON response. The audio intelligence documentation covers the full parameter set and response schema.

Sentiment analysis for qualification scoring

The entity and sentiment arrays map directly to CRM schema fields like company name, contact name, sentiment score, and deal amount.

The sentiment model analyzes each sentence in the transcript and returns sentiment labels with speaker attribution when diarization is enabled, so you can score prospect sentiment and rep sentiment separately. Conflating text-based sentiment with acoustic emotion detection leads to overstated capability claims in your product.

Activation:

  • Add "sentiment_analysis": true to your transcription request alongside the audio_url parameter.
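A minimal sketch of the request body, assuming the parameter names shown in the bullet above; the diarization flag is an additional assumption, so check the API reference for the exact schema:

```python
def build_transcription_request(audio_url: str) -> dict:
    """Assemble an async transcription request body with sentiment enabled."""
    return {
        "audio_url": audio_url,
        "sentiment_analysis": True,  # per-sentence sentiment in the response
        "diarization": True,         # assumed flag; enables per-speaker attribution
    }
```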

Named entity recognition for contact details

NER is the core extraction layer for CRM field population. The model detects named entities including contact information and key data points, returning each entity with its type, value, and position in the transcript.

In a sales call context, the entities that matter most are company name, personal name, job title, phone number, email address, monetary amounts (contract value, budget), and dates (timeline, renewal date). Each maps directly to a CRM field, and you define the mapping in your webhook handler, giving you full control over which entity types populate which fields and how conflicts resolve when a field already has a value.
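A sketch of that mapping step in the webhook handler. The entity-type labels here are assumptions for illustration; the actual labels in the NER output may differ:

```python
# Hypothetical mapping from NER entity types to CRM fields.
ENTITY_TO_CRM_FIELD = {
    "organization": "company_name",
    "person": "contact_name",
    "phone_number": "phone",
    "email": "email",
    "money": "deal_amount",
    "date": "timeline",
}

def map_entities(entities: list) -> dict:
    """Map detected entities to CRM field names, keeping the first
    occurrence of each field and ignoring unmapped entity types."""
    fields = {}
    for entity in entities:
        crm_field = ENTITY_TO_CRM_FIELD.get(entity["type"])
        if crm_field and crm_field not in fields:
            fields[crm_field] = entity["value"]
    return fields
```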

Optimizing multilingual enrichment pipelines

The accuracy bar for numerical entities is particularly high in financial and enterprise contexts. A fintech customer running 800 concurrent sessions reported 98.5% numerical accuracy on entities like phone numbers and contract values. A transposed digit in a phone number or contract value escapes manual review more often than a misspelled name, and the downstream cost runs higher.

Solaria-1 covers 100+ languages, including 42 that no other API-level STT vendor supports, among them Tagalog, Bengali, Punjabi, Tamil, Urdu, Persian, and Marathi. For CCaaS platforms with BPO operations in Southeast Asia, South Asia, or Latin America, those 42 languages represent real prospect conversations that would otherwise drop out of the enrichment pipeline entirely. The multilingual meeting transcription guide covers language detection behavior across different audio conditions.

Integrating speech-to-text with your existing CRM

The integration architecture for STT-to-CRM follows a five-step pattern: record the audio, submit it to the STT API via REST, receive the structured JSON response, run webhook logic to map entities to CRM fields, and push the payload to your CRM's write API. Each step has well-defined interfaces and transparent failure modes, which is the opposite of what you get from a packaged integration. The building a meeting assistant guide covers the full async pipeline architecture, including LLM integration and CRM write-back patterns.

REST & WebSocket for CRM STT

For async CRM enrichment workflows, REST provides a straightforward integration path. Submit your audio file or URL with a POST request to the Gladia transcription endpoint, including your API key in the x-gladia-key header. The API returns a job ID, and you configure a webhook URL to receive the structured JSON on completion.
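A minimal sketch of assembling that submission. Only the x-gladia-key header comes from the text above; the endpoint path and callback parameter name are assumptions to verify against the API reference before use:

```python
import json

GLADIA_ENDPOINT = "https://api.gladia.io/v2/transcription"  # assumed path

def build_submission(api_key: str, audio_url: str, webhook_url: str):
    """Assemble the URL, headers, and JSON body for the async POST request."""
    headers = {"x-gladia-key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"audio_url": audio_url, "callback_url": webhook_url})
    return GLADIA_ENDPOINT, headers, body
```

The returned pieces can be passed to any HTTP client (requests, urllib, httpx); the response contains the job ID, and the structured JSON arrives later at the webhook URL.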

WebSocket connections suit real-time workflows where you need partial transcripts mid-call for live agent assist. For post-call CRM enrichment, REST with webhook delivery is simpler to maintain and provides clearer error handling. The full Gladia documentation covers authentication, endpoint structure, and parameter reference.

Defining transcript-to-CRM data schema

Your webhook handler is where the pipeline translates the JSON output into CRM-specific field values. The key design decision is whether to overwrite existing CRM fields or only fill empty ones. For most pipelines, the right default is filling empty fields, preserving data that reps have entered manually while still capturing what the call adds. Overwrite logic makes sense for fields where recency matters, like last call sentiment or latest stated budget figure. For example, when processing Gladia's entity output for a HubSpot or Salesforce integration, you would extract company_name, contact_name, deal_amount, and sentiment_score from the JSON arrays and map them to the corresponding CRM fields, applying that conditional logic per field type.
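The fill-empty versus overwrite logic can be expressed as a per-field policy table. The field names and policy assignments below are illustrative, not a fixed Gladia or CRM schema:

```python
# "fill_empty" preserves rep-entered values; "overwrite" always takes
# the latest call's value.
FIELD_POLICY = {
    "company_name": "fill_empty",
    "contact_name": "fill_empty",
    "deal_amount": "overwrite",      # recency matters for stated budget
    "sentiment_score": "overwrite",  # always reflect the latest call
}

def merge_fields(existing: dict, extracted: dict) -> dict:
    """Merge extracted values into a CRM record per the field policy."""
    merged = dict(existing)
    for field, value in extracted.items():
        policy = FIELD_POLICY.get(field, "fill_empty")  # safe default
        if policy == "overwrite" or not merged.get(field):
            merged[field] = value
    return merged
```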

Our Python and JavaScript SDKs reduce the boilerplate to a few lines. Native integrations with Twilio, Recall, and other recording infrastructure mean you don't need a custom audio ingestion layer, reducing infrastructure complexity. You can follow the build an AI note-taker guide for a comparable async pipeline pattern.

Production monitoring for an STT-to-CRM pipeline needs three tracked metrics: WER on a held-out sample of real call audio measured against manually verified ground truth, transcript delivery latency from call end to webhook receipt, and entity extraction precision via quarterly spot-check of NER output against verified CRM records.
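WER itself is straightforward to compute over a held-out sample; this is the standard word-level edit-distance formulation (substitutions + deletions + insertions over reference word count):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via dynamic-programming edit distance over whitespace-split words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

In production you would normalize casing and punctuation before splitting; libraries like jiwer package the same computation with those normalizations built in.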

Model drift is a real risk. As your sales team expands into new geographies, the accent and language distribution of your call recordings shifts, and a model evaluated on your original audio set may degrade on the new distribution. Run WER evaluations against fresh audio samples quarterly, not only at vendor selection. Our async benchmark methodology is open and reproducible, so you can adapt the same evaluation framework to your specific audio conditions.

Build or buy AI: optimize your enrichment spend

The honest cost model for STT infrastructure has three components: the API fee, the DevOps labor to run it, and the cost of inaccuracy, which is the manual review burden generated by poor transcription. The three-component model applies most directly to the build-versus-buy decision covered in the "Resource drain from self-hosting infra" subsection below, where DevOps overhead and inaccuracy risk differ significantly between self-hosted and managed options. When comparing managed API providers against one another, DevOps overhead is roughly equivalent, so the growth forecast table in "Forecasting costs for growth" isolates the API fee variable specifically, since feature bundling and per-hour rate become the meaningful differentiators at that stage of the comparison.

Resource drain from self-hosting infra

Self-hosting eliminates the per-hour API cost but replaces it with fixed costs that don't scale down with usage. At 10,000 hours/month, self-hosted infrastructure (compute, storage, networking, and engineering overhead) runs $25,000–$50,000/month, based on the cost model in our build an AI note-taker guide, versus roughly $5,000 for a managed API at equivalent volume. Add the engineering labor for GPU provisioning, model version management, and incident response, and you're consuming DevOps bandwidth that doesn't ship features.

Self-hosted setups often exceed 10% WER on real-world audio, versus the sub-3% WER teams like Claap achieve in production. That accuracy gap generates manual review queues that add labor cost back into the model and undermine the automation case entirely.

Predictable vs. unpredictable API costs

Add-on pricing is the most common source of invoice surprise in STT infrastructure. A vendor with a low base rate can become expensive once you add speaker diarization, sentiment analysis, entity detection, and summarization as separately metered features. Evaluating total cost requires understanding which features are included at the base rate versus charged as add-ons.

Our Starter and Growth plans include key audio intelligence features at the base rate. There's no add-on math required and no feature gating that changes the cost model mid-contract.

Forecasting costs for growth (1x, 5x, 10x)

Public per-hour pricing matters when forecasting growth. Evaluate vendors based on total cost including all required features, not just the base transcription rate.

Monthly volume | Gladia Growth (all-in) | AssemblyAI base + add-ons | Monthly difference
1,000 hours (1x) | $200 | ~$300 | ~$100
5,000 hours (5x) | $1,000 | ~$1,500 | ~$500
10,000 hours (10x) | $2,000 | ~$3,000 | ~$1,000

The AssemblyAI column reflects their published add-on structure for a like-for-like feature set: $0.15/hr base transcription, plus $0.02/hr sentiment, $0.03/hr summarization, $0.08/hr entity detection, and $0.02/hr speaker identification. Enabling those features brings the effective rate to approximately $0.30/hr.

Minimizing vendor lock-in risk

Lock-in risk in STT infrastructure is real but manageable with disciplined architecture decisions. Standard REST and WebSocket interfaces mean your integration logic isn't tied to proprietary SDKs. Keeping your schema mapping layer separate from your STT API calls means you can swap the underlying transcription vendor without rewriting your CRM integration logic.

Our audio-to-LLM pipeline supports bring-your-own-model configuration, so your LLM-based enrichment logic isn't locked to our integrated model options.

Ensuring STT accuracy with real-world audio

Vendor-provided benchmarks are a starting point, not a purchase decision. The only accuracy number that matters for your pipeline is the WER on your actual call recordings, under your actual conditions: your reps' accents, your customers' accents, your recording infrastructure's noise floor, and your language distribution.

Evaluating your lead enrichment AI

Take a stratified sample of representative call recordings spanning your language and accent distribution. Submit them to each vendor under evaluation. Measure WER against a manually verified ground truth transcript, and measure NER precision on the entity types you care about most (company name, phone number, monetary amounts). Document the conditions where the model degrades: cross-talk, heavy accent, low-bandwidth audio, and code-switched conversations.
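Entity-level precision for that spot-check can be computed over (type, value) pairs against the manually verified set; a minimal sketch:

```python
def ner_precision(predicted: list, verified: list) -> float:
    """Fraction of predicted (type, value) entity pairs that appear in the
    manually verified ground-truth set."""
    if not predicted:
        return 0.0
    verified_set = set(verified)
    correct = sum(1 for pair in predicted if pair in verified_set)
    return correct / len(predicted)
```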

If your sales org includes bilingual markets, mid-conversation language switching is a specific failure mode to test for. Our code-switching detection automatically detects language changes across all 100+ supported languages in both real-time and async modes. Without it, the model defaults to its primary language and garbles the second, or drops the segment entirely and returns no output. The contact center code-switching guide covers this failure mode in detail.

Benchmark data & testing conditions

Solaria-1 achieves on average 29% lower WER than alternatives on conversational speech and on average 3x lower DER vs. alternatives, benchmarked across 8 providers, 7 datasets, and 74+ hours of audio. The benchmark covers conversational speech under real-world conditions, not only clean read-aloud audio, which is the condition that actually predicts production performance in a CRM enrichment pipeline.

Use the Gladia playground walkthrough to test transcription output on your own audio files before building the integration.

Common pitfalls in lead data enrichment

The failures we see most often in audio-based CRM enrichment pipelines are predictable and avoidable. They fall into three categories.

Benchmark WER for live STT systems

The most dangerous failure mode is staging metrics that look clean while production degrades silently. Test sets built from your early call recordings don't represent the audio conditions you'll encounter six months later when your sales team expands into new geographies or switches recording infrastructure. The fix is continuous WER monitoring on a rolling sample of production calls, not a one-time evaluation at vendor selection. The meeting transcription pitfalls guide covers the most common implementation mistakes in detail.

GDPR compliance for data enrichment pipelines

The most common compliance gap is vendors that use customer audio for model retraining by default, with opt-out buried in a DPA appendix. If your sales calls contain PII, financial data, or health information, you need explicit written confirmation that your audio isn't feeding model training.

On Growth and Enterprise plans, customer audio is never used to retrain our models, no opt-out required, no contract clause to negotiate. On the Starter plan, data can be used for model training by default. PII redaction capabilities are available in the API. Review the compliance hub and verify data residency region against your customer contracts before the DPA is signed.

Optimizing real-time enrichment latency

For specific live-assist use cases like surfacing a competitor mention mid-call or triggering a real-time pricing alert, we support real-time transcription with ~300ms latency for partial transcripts. That's a secondary capability for CRM workflows, not the core pipeline for post-call field population.

CRM data enrichment pipeline checklist

Use this checklist before shipping your audio enrichment pipeline to production:

Data capture

  • Call recordings route to your STT API on call completion
  • Audio format is confirmed with your STT provider
  • Language and accent distribution of call recordings is documented

STT configuration

  • Diarization is enabled in async mode
  • NER is enabled and entity types are mapped to CRM fields
  • Sentiment analysis is enabled and mapped to a qualification score field
  • Code-switching is enabled if your pipeline includes multilingual calls
  • Custom vocabulary is configured with domain-specific terms
  • PII redaction is configured if required

Compliance

  • Vendor DPA reviewed and data residency region confirmed against customer contracts
  • Retraining clause confirmed in writing
  • SOC 2 Type II, ISO 27001, HIPAA, and GDPR certifications verified

Schema and integration

  • Webhook URL is configured to receive transcription results
  • CRM field mapping is defined in your webhook handler
  • Fill-empty versus overwrite logic is explicitly defined per field type
  • Error handling and retry logic is implemented for API failures

Monitoring

  • WER monitoring is set up on a rolling sample of production calls
  • Transcript delivery latency is tracked from call end to webhook receipt
  • NER precision is scheduled for quarterly manual spot-check
  • Alert thresholds are set for WER degradation above your established baseline

Gladia offers 10 hours of free audio transcription per month. Get started with our API and test against your own call recordings before committing to a plan.

FAQs

What is a lead enrichment pipeline?

A lead enrichment pipeline is the automated process of capturing raw lead data from sources like form submissions, call recordings, and CRM imports, matching it against external databases, and appending additional structured fields to each record. The output is a fuller prospect profile including contact details, firmographics, intent signals, and behavioral data, without manual rep entry.

What is the cost per hour for async STT with full features enabled?

Gladia uses per-hour pricing starting at $0.20/hr on the Growth plan, with diarization, translation, sentiment analysis, NER, summarization, and code-switching included at the base rate.

Does Gladia use customer audio to train its models?

On Growth and Enterprise plans, customer audio is never used to retrain our models, no opt-out required, no contract clause to negotiate. On the Starter plan, data can be used for model training by default. This distinction applies per tier and is documented in the DPA and compliance hub.

Is speaker diarization available in real-time transcription workflows?

Gladia's production-grade speaker diarization is available in async (batch) workflows only, powered by pyannoteAI's Precision-2 model. Speaker labels are assigned at the word level in the JSON response. For real-time scenarios, speaker attribution can be handled in post-processing for higher accuracy.

How do you prevent CRM data corruption from transcription errors?

Evaluate vendors on real-world audio samples that match your production conditions, including accented speech, background noise, and multilingual calls, not only on clean benchmark datasets. Enable custom vocabulary for domain-specific terms, and monitor WER on a rolling sample of production calls quarterly. Solaria-1 is designed for production environments with robust accuracy on real-world audio, which translates directly to fewer corrupted entities in your CRM field mapping.

Key terms glossary

Word error rate (WER): The percentage of words in a transcript that differ from the ground truth, calculated as substitutions plus deletions plus insertions divided by total reference words. A 10% WER on a 100-word utterance means 10 words are wrong and every entity in those positions is potentially corrupted in your CRM.

Diarization error rate (DER): A measure of how accurately an STT system attributes words to the correct speaker, expressed as the percentage of audio time incorrectly labeled. Lower DER means cleaner speaker attribution in multi-party call transcripts and more reliable per-speaker sentiment and intent scoring.

Code-switching: The phenomenon where a speaker alternates between two or more languages within a single conversation. Without proper code-switching detection, STT models may produce degraded output when language changes occur mid-conversation. Solaria-1 detects language changes automatically and maintains transcript continuity across 100+ supported languages.

Headless lead enrichment: An enrichment architecture that runs as an async background pipeline without a CRM UI dependency. It triggers on call completion or file upload, processes audio through STT and NLP layers, and pushes structured data to the CRM via webhook, with the enrichment logic decoupled from any specific CRM vendor.

Named entity recognition (NER): An NLP technique that identifies and classifies named entities in transcript text, such as person names, company names, phone numbers, and monetary values, into predefined categories. In CRM enrichment, NER output maps directly to CRM field values and sets the data quality ceiling for every downstream scoring and routing system that reads those fields.
