Fundamentals & recent advances in HMM-based speech synthesis
Published on: FALA 2011 Keynote, November 2010

Statistical parametric speech synthesis based on HMMs has grown in popularity in recent years. This talk outlines the system architecture and then describes the basic techniques used in the system, including algorithms for speech parameter generation from HMMs, with simple examples. Its relation to the unit selection approach and recent improvements are summarized. Techniques developed for increasing the flexibility and improving the speech quality of HMM-based synthesis are also reviewed.

Transcript

  • 1. Fundamentals & recent advances in HMM-based speech synthesis. Heiga ZEN, Toshiba Research Europe Ltd., Cambridge Research Laboratory. FALA2010, Vigo, Spain, November 10th, 2010
  • 2.
    • Text-to-speech synthesis (TTS)
      • Text (seq of discrete symbols) → Speech (continuous time series)
    • Automatic speech recognition (ASR)
      • Speech (continuous time series) → Text (seq of discrete symbols)
    • Machine Translation (MT)
      • Text (seq of discrete symbols) → Text (seq of discrete symbols)
    Text-to-speech as a mapping problem (e.g., “Dobré ráno” → “Good morning” for MT; text “Good morning” → spoken “Good morning” for TTS)
  • 3. Speech production process (diagram): text (concept) → air flow → sound source (voiced: pulse; unvoiced: noise) → frequency transfer characteristics → speech. Speech information modulates the carrier wave via the fundamental frequency, the voiced/unvoiced decision, and the frequency transfer characteristics.
  • 4. Flow of text-to-speech synthesis: TEXT → text analysis (text normalization, part-of-speech tagging, grapheme-to-phoneme conversion, pause prediction, prosody prediction) → speech synthesis (waveform generation) → SYNTHESIZED SPEECH. This talk focuses on the speech synthesis part of TTS.
  • 5.
    • Rule-based, formant synthesis (~’90s)
      • Based on parametric representation of speech
      • Hand-crafted rules to control phonetic unit
        • DECtalk (or KlattTalk / MITTalk) [Klatt;‘82]
    Speech synthesis methods (1) Block diagram of KlattTalk
  • 6.
    • Corpus-based, concatenative synthesis (’90s~)
      • Concatenate small speech units (e.g., phone) from a database
      • Large data + automatic learning ⇒ High-quality synthetic voices
      • Single inventory; diphone synthesis [Moulines;‘90]
      • Multiple inventory; unit selection synthesis [Sagisaka;‘92 , Black;‘96]
    Speech synthesis methods (2)
  • 7.
    • Corpus-based, statistical parametric synthesis (mid ’90s~)
      • Large data + automatic training
        ⇒ Automatic voice building
      • Source-filter model + statistical modeling
        ⇒ Flexible to change its voice characteristics
      • Hidden Markov models (HMMs) as its statistical acoustic model
        ⇒ HMM-based speech synthesis (HTS) [Yoshimura;‘02]
    Speech synthesis methods (3) Parameter generation Model training Feature extraction Waveform generation
  • 8.
    • This talk gives the basic implementations & some recent work in HMM-based speech synthesis research
    Popularity of HMM-based speech synthesis
  • 9. Outline
    • Fundamentals of HMM-based speech synthesis
      • Mathematical formulation
      • Implementation of individual components
      • Examples demonstrating its flexibility
    • Recent work
      • Quality improvements
        • Vocoding
        • Acoustic modeling
        • Oversmoothing
      • Other challenging research topics
    • Conclusions
  • 10. Mathematical formulation
    • Statistical parametric speech synthesis
      • Training
      • Synthesis
      • Using HMM as generative model
        ⇒ HMM-based speech synthesis (HTS) [Yoshimura;’02]
  • 11. HMM-based speech synthesis system (HTS) (block diagram). Training part: SPEECH DATABASE → spectral & excitation parameter extraction → training HMMs with labels → context-dependent HMMs & state duration models. Synthesis part: TEXT → text analysis → labels → parameter generation from HMMs → excitation parameters → excitation generation → excitation → synthesis filter (driven by the generated spectral parameters) → SYNTHESIZED SPEECH.
  • 12. HMM-based speech synthesis system (HTS) (block diagram repeated from slide 11).
  • 13. HMM-based speech synthesis system (HTS) (block diagram repeated from slide 11).
  • 14. Speech production process (diagram repeated from slide 3).
  • 15. Divide speech into frames: speech is a non-stationary signal, but can be assumed to be quasi-stationary ⇒ divide speech into short-time frames (e.g., 5 ms shift, 25 ms length)
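The framing step described above can be sketched in a few lines. The 25 ms window / 5 ms shift values come from the slide; the Hamming window and the 16 kHz sampling rate are illustrative assumptions, not values from the talk:

```python
import numpy as np

def frame_signal(x, sr, frame_ms=25.0, shift_ms=5.0):
    """Split a waveform into overlapping quasi-stationary frames.

    x: 1-D waveform, sr: sampling rate in Hz.
    The 25 ms / 5 ms values match the example on the slide.
    """
    frame_len = int(sr * frame_ms / 1000)   # e.g. 400 samples at 16 kHz
    shift = int(sr * shift_ms / 1000)       # e.g. 80 samples at 16 kHz
    n_frames = 1 + max(0, (len(x) - frame_len) // shift)
    frames = np.stack([x[i * shift : i * shift + frame_len]
                       for i in range(n_frames)])
    # A tapered window (Hamming here, as an assumption) is usually
    # applied before short-time spectral analysis
    return frames * np.hamming(frame_len)

sr = 16000
x = np.random.randn(sr)          # 1 second of dummy audio
frames = frame_signal(x, sr)
print(frames.shape)              # (196, 400)
```

Each row is then analyzed independently (spectral and excitation parameters per frame), which is what makes the frame-level observation vectors of the later slides possible.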
  • 16. Source-filter model (diagram): excitation (pulse train for voiced, white noise for unvoiced) → linear time-invariant system → speech; the excitation (source) part and the spectral (filter) part are related to the speech spectrum via the Fourier transform.
  • 17.
    • Parametric models of the speech spectrum
    Spectral (filter) model: autoregressive (AR) model and exponential (EX) model. ML estimation of the spectral model parameters: AR model ⇒ linear prediction (LP) [Itakura;’70]; EX model ⇒ ML-based cepstral analysis.
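As a concrete illustration of the AR route, here is a minimal linear-prediction sketch using the autocorrelation method. A plain linear solve stands in for the usual Levinson-Durbin recursion, and the AR(2) test signal is an invented example, not data from the talk:

```python
import numpy as np

def lpc(x, order=12):
    """Linear prediction coefficients via the autocorrelation method.

    Solves the normal equations R a = r for the AR (all-pole) model of
    one windowed frame x. A direct solve is used here for clarity
    instead of the Levinson-Durbin recursion.
    """
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    return a          # predictor: x[n] ~ sum_k a[k] * x[n-1-k]

# Frame drawn from a known AR(2) process: x[n] = 1.3 x[n-1] - 0.4 x[n-2] + e[n]
rng = np.random.default_rng(0)
x = np.zeros(4000)
for n in range(2, len(x)):
    x[n] = 1.3 * x[n - 1] - 0.4 * x[n - 2] + rng.standard_normal()

a = lpc(x, order=2)
print(a.round(2))     # close to [1.3, -0.4]
```

Because the generating process is AR, the estimated coefficients recover it closely; for real speech frames the AR model is only an approximation of the vocal-tract filter.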
  • 18. Excitation (source) model
    • Excitation model: pulse/noise excitation
      • Voiced (periodic) ⇒ pulse trains
      • Unvoiced (aperiodic) ⇒ white noise
    • Excitation model parameters
      • V/UV decision
      • V ⇒ fundamental frequency (F0)
    (figure: pulse-train and white-noise excitation)
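A toy version of the pulse/noise excitation described above might look as follows. The frame shift, sampling rate, and noise gain are illustrative choices, not values from the talk:

```python
import numpy as np

def pulse_noise_excitation(f0_frames, sr=16000, shift=80):
    """Simple pulse/noise excitation, one F0 value per 5 ms frame.

    f0_frames: F0 in Hz for voiced frames, 0 for unvoiced frames
    (i.e. the V/UV decision is encoded as F0 == 0).
    Voiced frames get a periodic pulse train, unvoiced get white noise.
    """
    e = np.zeros(len(f0_frames) * shift)
    next_pulse = 0.0
    for i, f0 in enumerate(f0_frames):
        start = i * shift
        if f0 > 0:                        # voiced: pulses every sr/f0 samples
            period = sr / f0
            while next_pulse < start + shift:
                e[int(next_pulse)] = 1.0
                next_pulse += period
        else:                             # unvoiced: white noise
            e[start:start + shift] = np.random.randn(shift) * 0.1
            next_pulse = start + shift    # reset pulse phase at V/UV boundary
    return e

# 50 voiced frames at 120 Hz followed by 50 unvoiced frames
exc = pulse_noise_excitation(np.array([120.0] * 50 + [0.0] * 50))
print(exc.shape)    # (8000,)
```

Feeding this excitation through the synthesis filter of the source-filter model produces the (buzzy-sounding) reconstructed speech of the next slide; the later STRAIGHT slides replace exactly this hard pulse/noise switch with mixed excitation.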
  • 19. Speech samples Natural speech Reconstructed speech from extracted parameters (cepstral coefficients & F0 with V/UV decisions) Quality degrades, but main characteristics are preserved
  • 20. HMM-based speech synthesis system (HTS) (block diagram repeated from slide 11).
  • 21. Structure of the state-output (observation) vector: spectrum part (spectral parameters, e.g., cepstrum or LSPs) and excitation part (log F0 with V/UV).
  • 22. Dynamic features
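The slide's equations are not reproduced in this transcript, but the commonly used delta windows can be sketched as follows. The specific window coefficients are the standard ones, assumed here rather than taken from the slide:

```python
import numpy as np

def append_dynamic_features(c):
    """Append delta and delta-delta features to a static sequence.

    c: (T, D) static features (e.g. cepstra). Uses the common windows
    delta[t]  = 0.5 * (c[t+1] - c[t-1])
    delta2[t] = c[t+1] - 2*c[t] + c[t-1]
    with edge frames padded by repetition.
    """
    p = np.pad(c, ((1, 1), (0, 0)), mode="edge")
    delta = 0.5 * (p[2:] - p[:-2])
    delta2 = p[2:] - 2 * p[1:-1] + p[:-2]
    return np.hstack([c, delta, delta2])   # (T, 3D) observation vectors

c = np.cumsum(np.ones((5, 2)), axis=0)     # static ramp: 1, 2, 3, 4, 5
o = append_dynamic_features(c)
print(o.shape)    # (5, 6)
```

These stacked static + dynamic vectors are exactly what the multi-stream HMMs of the following slides model, and what the parameter generation algorithm later inverts to recover a smooth static trajectory.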
  • 23. HMM-based modeling (diagram): transcription “sil a i sil” → sentence HMM → state sequence → observation sequence.
  • 24. Multi-stream HMM structure: stream 1 = spectrum (cepstrum or LSP, & dynamic features); streams 2–4 = excitation (log F0 & dynamic features).
  • 25. Observation of F0 (plot: log frequency vs. time): F0 is continuous in voiced regions but undefined in unvoiced regions ⇒ unable to model with a purely continuous or discrete distribution.
  • 26. Multi-space probability distribution (MSD) (diagram): voiced/unvoiced weights over alternating voiced and unvoiced regions of the F0 contour.
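A minimal sketch of how an MSD assigns probability to the mixed discrete/continuous F0 observation. The function name and all parameter values are illustrative, not from the talk:

```python
import numpy as np

def msd_output_prob(obs, w_voiced, mu, var):
    """State-output probability under a two-space MSD for log F0.

    obs is None for an unvoiced frame (zero-dimensional space with
    weight 1 - w_voiced) or a log-F0 value for a voiced frame (1-D
    Gaussian space with weight w_voiced). One distribution thus covers
    the mixed discrete (V/UV) + continuous (F0) observation.
    """
    if obs is None:                       # unvoiced: discrete space
        return 1.0 - w_voiced
    g = np.exp(-((obs - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
    return w_voiced * g                   # voiced: weighted Gaussian density

p_uv = msd_output_prob(None, w_voiced=0.7, mu=5.0, var=0.04)
p_v = msd_output_prob(5.0, w_voiced=0.7, mu=5.0, var=0.04)
print(p_uv)    # 0.3 (the discrete unvoiced weight)
```

In training, the space weights and the Gaussian parameters are all re-estimated together by EM, which is why the MSD streams appear alongside the ordinary Gaussian spectrum stream in the state-output structure of the next slide.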
  • 27. Structure of state-output distributions: stream 1 (spectral params) = single Gaussian; streams 2–4 (log F0) = MSD (Gaussian for the voiced space & discrete for the unvoiced space).
  • 28. Prototype definition (HTK-style MMF excerpt; stream 1 = mel-cepstrum (MCEP), streams 2–4 = MSD log-F0 streams):
    ~o <VecSize> 78 <USER> <DIAGC>
    <StreamInfo> 4 75 1 1 1
    <MSDInfo> 4 0 1 1 1
    <BeginHMM> <NumStates> 7
    <State> 2
    <Stream> 1
    <Mean> 75 0.0 0.0 0.0 …
    <Variance> 75 0.0 0.0 0.0 …
    <Stream> 2
    <NumMixes> 2
    <Mixture> 1 0.5000
    <Mean> 1 0.0
    <Variance> 1 1.0
    <Mixture> 2 0.5000
    <Mean> 0
    <Variance> 0
    <Stream> 3 …
  • 29. Training process (flowchart), starting from data & labels: (1) compute variance floor; (2) initialize context-independent (CI, monophone) HMMs by segmental k-means; (3) re-estimate CI-HMMs by the EM algorithm; (4) copy CI-HMMs to context-dependent (CD, full-context) HMMs; (5) re-estimate CD-HMMs by EM; (6) decision tree-based clustering; (7) re-estimate the clustered CD-HMMs by EM; (8) untie the parameter-tying structure; (9) estimate CD duration models from forward-backward statistics; (10) decision tree-based clustering of the duration models ⇒ estimated HMMs & duration models.
  • 30. HMM-based modeling (diagram): transcription with full-context labels “sil sil-a+i a-i+sil sil” → sentence HMM → state sequence → observation sequence.
  • 31.
    • Phoneme
      • current phoneme
      • {preceding, succeeding} two phonemes
    • Syllable
      • # of phonemes at {preceding, current, succeeding} syllable
      • {accent, stress} of {preceding, current, succeeding} syllable
      • Position of current syllable in current word
      • # of {preceding, succeeding} {accented, stressed} syllable in current phrase
      • # of syllables {from previous, to next} {accented, stressed} syllable
      • Vowel within current syllable
    • Word
      • Part of speech of {preceding, current, succeeding} word
      • # of syllables in {preceding, current, succeeding} word
      • Position of current word in current phrase
      • # of {preceding, succeeding} content words in current phrase
      • # of words {from previous, to next} content word
    • Phrase
      • # of syllables in {preceding, current, succeeding} phrase
    • …
    Context-dependent modeling: huge # of combinations ⇒ difficult to have all possible models
  • 32. Training process (flowchart repeated from slide 29).
  • 33. Decision tree-based context clustering [Odell;’95] (diagram): full-context states (e.g., k-a+b/A:…, t-e+h/A:…, w-a+t/A:…, gy-e+sil/A:…) are split by yes/no questions such as “R=silence?”, “L=‘gy’?”, “C=voiced?”, “L=‘w’?”; the leaf nodes hold the synthesized (tied) states.
  • 34.
    • Spectrum & excitation have different context dependency
      ⇒ Build decision trees separately
    Stream-dependent clustering (figure: separate decision trees for mel-cepstrum and for F0)
  • 35. data & labels Training process Compute variance floor Initialize CI-HMMs by segmental k-means Reestimate CI-HMMs by EM algorithm Copy CI-HMMs to CD-HMMs monophone (context-independent, CI) Reestimate CD-HMMs by EM algorithm Decision tree-based clustering Reestimate CD-HMMs by EM algorithm Untie parameter tying structure Estimate CD-dur Models from FB stats Decision tree-based clustering Estimated dur models Estimated HMMs fullcontext (context-dependent, CD)
  • 36. Estimation of state duration models [Yoshimura;’98] (figure: state-duration distribution over durations 1–8).
  • 37. Stream-dependent clustering (figure): separate decision trees for mel-cepstrum and for F0 (per HMM stream), plus a decision tree for the state duration models.
  • 38. Training process (flowchart repeated from slide 29).
  • 39. HMM-based speech synthesis system (HTS) (block diagram repeated from slide 11).
  • 40. Composition of the sentence HMM for given text: TEXT → text analysis (text normalization, POS tagging, G2P, pause prediction) → context-dependent label sequence → sentence HMM composed from the given labels; this sentence HMM gives the output distributions from which the speech parameters are generated.
  • 41. ⇒ Speech parameter generation algorithm
  • 42. Determination of state sequence (1) (diagram): the state sequence for the observation sequence is determined by determining the state durations.
  • 43. Determination of state sequence (2)
  • 44. Determination of state sequence (3) (figure: state-duration probability over durations 1–8, geometric vs. Gaussian).
  • 45. ⇒ Speech parameter generation algorithm
  • 46. Without dynamic features (figure: mean & variance): the generated parameter sequence is a step-wise sequence of mean values.
  • 47. Integration of dynamic features: the speech parameter vector o includes both static features c and dynamic features; the relationship between o and c can be arranged in matrix form as o = W c.
  • 48. Speech parameter generation algorithm
  • 49. Solution: maximizing the output probability subject to o = W c leads to the normal equations Wᵀ U⁻¹ W c = Wᵀ U⁻¹ μ, which are solved for the smooth static parameter trajectory c.
  • 50. Generated speech parameter trajectory (figure: static & dynamic means and variances, and the resulting smooth trajectory).
  • 51. Generated spectra for “sil a i sil” (figure, 0–5 kHz): without vs. with dynamic features.
  • 52. HMM-based speech synthesis system (HTS) (block diagram repeated from slide 11).
  • 53. Source-filter model (diagram): the generated excitation parameters (log F0 with V/UV) drive the pulse-train/white-noise excitation, and the generated spectral parameters (cepstrum, LSP) define the linear time-invariant system; filtering the excitation yields the synthesized speech.
  • 54. Spectral representation & corresponding filter
  • 55. Speech samples w/o dynamic features w/ dynamic features Use of dynamic features can reduce discontinuity
  • 56. Outline
    • Fundamentals of HMM-based speech synthesis
      • Mathematical formulation
      • Implementation of individual components
      • Examples demonstrating its flexibility
    • Recent work
      • Quality improvements
        • Vocoding
        • Acoustic modeling
        • Oversmoothing
      • Other challenging research topics
    • Conclusions
  • 57.
    • Adaptation/adaptive training of HMMs
      • Originally developed in ASR, but works very well in TTS
      • Average voice model (AVM)-based speech synthesis [Yamagishi;’06]
      • Requires only a small amount of target-speaker / speaking-style data
        ⇒ More efficient creation of new voices
    Adaptation (mimic voice) (diagram): training speakers → adaptive training → average-voice model → adaptation → target speakers
  • 58.
    • Speaker adaptation
      • VIP voice
      • Child voice
    • Style (expressiveness) adaptation (in Japanese)
      • Joyful
      • Sad
      • Rough
    • Difficult to collect a large amount of data
      ⇒ The adaptation framework gives a way to build TTS from a small database
    Adaptation demo Some samples are from http://homepages.inf.ed.ac.uk/jyamagis/Demo-html/demo.html
  • 59.
    • Interpolate parameters among representative HMM sets
      • Can obtain new voices even when no adaptation data is available
      • Gradually change speakers & speaking styles [Yoshimura;’97, Tachibana;’05]
    Interpolation (mixing voice)
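One simple interpolation scheme from the literature combines the corresponding Gaussians of K representative model sets. The sketch below, with invented "male"/"female" numbers, is an illustration of that idea rather than the exact method of [Yoshimura;'97]:

```python
import numpy as np

def interpolate_gaussians(means, covs, weights):
    """Interpolate the tied output Gaussians of K HMM sets.

    One common scheme interpolates each Gaussian as
        mu    = sum_k w_k * mu_k
        sigma = sum_k w_k^2 * sigma_k   (independent sources)
    means: (K, D), covs: (K, D) diagonal covariances, weights: (K,)
    summing to 1.
    """
    w = np.asarray(weights, dtype=float)[:, None]
    mu = (w * means).sum(axis=0)
    sigma = (w ** 2 * covs).sum(axis=0)
    return mu, sigma

# Hypothetical "male" and "female" Gaussians for one spectral dimension
mu, sigma = interpolate_gaussians(np.array([[1.0], [3.0]]),
                                  np.array([[0.5], [0.5]]),
                                  [0.5, 0.5])
print(mu, sigma)    # [2.] [0.25]
```

Sweeping the weights from (1, 0) to (0, 1) then moves the synthetic voice gradually between the two speakers (or speaking styles), with no adaptation data required.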
  • 60.
    • Speaker interpolation (in Japanese)
      • Male & Female
    • Style interpolation
      • Neutral → Angry
      • Neutral → Happy
    Interpolation demo Samples are from http://www.sp.nitech.ac.jp/ & http://homepages.inf.ed.ac.uk/jyamagis/Demo-html/demo.html
  • 61.
    • Multiple-regression HMMs [Fujinaga;’01]
      • Assign intuitive meaning to control synthetic voice [Nose;’07]
    Multiple regression (control voice) (diagram): an emotion vector with Angry / Happy / Sad axes controls the synthetic voice.
  • 62.
    • Style-control
    Angry Happy Sad Multiple regression demo Samples are from http://homepages.inf.ed.ac.uk/jyamagis/Demo-html/demo.html
  • 63. Outline
    • Fundamentals of HMM-based speech synthesis
      • Mathematical formulation
      • Implementation of individual components
      • Examples demonstrating its flexibility
    • Recent work
      • Quality improvements
        • Vocoding
        • Acoustic modeling
        • Oversmoothing
      • Other challenging research topics
    • Conclusions
  • 64. Drawbacks of HTS
    • Quality of synthesized speech
      • Buzzy
      • Flat
      • Muffled
    • Three major factors degrade the quality
      • Poor vocoding ⇒ how to parameterize speech?
      • Inaccurate acoustic modeling ⇒ how to model extracted speech parameter trajectories?
      • Over-smoothing ⇒ how to recover generated speech parameter trajectories?
    • Recent work has drastically improved the quality of HTS
  • 65. Problem in vocoding (figure: pulse-train / white-noise excitation, voiced and unvoiced regions)
    • Binary pulse or noise excitation
      • Voiced (periodic) ⇒ pulse trains
      • Unvoiced (aperiodic) ⇒ white noise
      • Difficult to model a mixture of voiced & unvoiced sounds (e.g., voiced fricatives)
    • Results in buzziness in synthesized speech
  • 66.
    • Limitations of HMMs for modeling speech parameters
      • Piece-wise constant statistics
        ⇒ Statistics do not vary within an HMM state
    Problem in acoustic modeling (figure: piece-wise constant means & variances over /sil/ /a/ /i/ /sil/)
  • 67. Problem in acoustic modeling
    • Limitations of HMMs for modeling speech parameters
      • Piece-wise constant statistics
        ⇒ Statistics do not vary within an HMM state
      • Frame-wise conditional independence assumption
        ⇒ State output probability depends only on the current state
  • 68. Problem in acoustic modeling
    • Limitations of HMMs for modeling speech parameters
      • Piece-wise constant statistics
        ⇒ Statistics do not vary within an HMM state
      • Frame-wise conditional independence assumption
        ⇒ State output probability depends only on the current state
      • Weak duration modeling
        ⇒ State duration probability decreases exponentially with time
    (figure: geometric state-duration probability over durations 1–8)
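The weak-duration point can be made numerically: an HMM's implicit geometric duration distribution always peaks at one frame, while an explicit Gaussian model (as in the HSMM of slide 76) can peak at any duration. The self-transition probability and Gaussian parameters below are illustrative:

```python
import numpy as np

# With self-transition probability a, HMM state duration is geometric:
# p(d) = a^(d-1) * (1 - a), which always peaks at d = 1 frame.
a = 0.8
d = np.arange(1, 9)
geometric = (a ** (d - 1)) * (1 - a)

# An explicit Gaussian duration model (as in an HSMM) can instead peak
# around its estimated mean, e.g. 4 frames:
mean, var = 4.0, 2.0
gaussian = np.exp(-((d - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

print(int(np.argmax(geometric)) + 1)   # 1 (always the shortest duration)
print(int(np.argmax(gaussian)) + 1)    # 4 (around the modeled mean)
```

Real phone/state durations are clearly not geometrically distributed, which is one reason explicit duration models improve synthesis.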
  • 69. Problem in acoustic modeling
    • Limitations of HMMs for modeling speech parameters
      • Piece-wise constant statistics
        ⇒ Statistics do not vary within an HMM state
      • Frame-wise conditional independence assumption
        ⇒ State output probability depends only on the current state
      • Weak duration modeling
        ⇒ State duration probability decreases exponentially with time
      • None of these assumptions hold for real speech data
      • Speech parameters are generated from the acoustic models
        ⇒ A better acoustic model may produce better speech
  • 70. Over-smoothing problem (figure: natural vs. generated spectra, 0–8 kHz)
    • Speech parameter generation algorithm
      • Dynamic features make generated parameters smooth
      • Sometimes too smooth ⇒ muffled speech
    • Why?
      • Fine structure of the spectra is removed by the averaging effect
      • Model underfitting
  • 71. Nitech-HTS 2005 for Blizzard Challenge
    • Blizzard Challenge
      • Open evaluation of TTS
      • Common database
      • From 2005
    • Nitech-HTS 2005
      • Achieved best naturalness & intelligibility
      • Still used as a calibration system in the Blizzard Challenge
      • Includes various enhancements
        • Advanced vocoding ⇒ high-quality vocoder STRAIGHT
        • Better acoustic modeling ⇒ hidden semi-Markov model (HSMM)
        • Oversmoothing compensation ⇒ speech parameter generation algorithm considering GV
  • 72. STRAIGHT analysis & synthesis (diagram). Analysis: waveform → fixed-point F0 extraction and F0-adaptive spectral smoothing in the time-frequency region → F0, smoothed spectrum, and aperiodic factors. Synthesis: mixed excitation with phase manipulation → synthetic waveform.
  • 73. Mel-cepstrum from STRAIGHT spectrum (figure, 0–8 kHz, power in dB): FFT power spectrum vs. FFT + mel-cepstral analysis vs. STRAIGHT + mel-cepstral analysis.
  • 74. Excitation generation in STRAIGHT (diagram): pulse and noise excitations are weighted according to F0 and the aperiodicity measure, phase-manipulated, and summed into the mixed excitation (waveform and spectrum panels, 0–8 kHz).
  • 75. Examples of generated excitation signals pulse/noise STRAIGHT
  • 76. HSMM-based modeling
    • Hidden semi-Markov model (HSMM)
      • Can be viewed as HMM with state-duration probability distributions
      • State-output, state-duration, & state-transition probability distributions can be estimated simultaneously
  • 77. Parameter generation considering GV: the global variance (GV) is the intra-utterance variance of the speech parameters; the generated trajectory has a smaller GV than natural speech (figure: 2nd mel-cepstral coefficient over 0–3 s, natural vs. generated).
  • 78.
    • Objective function of standard parameter generation
    • Objective function of parameter generation with GV
      • 1st term is the same as the standard one
      • 2nd term works as a penalty against oversmoothing
    Parameter generation considering GV
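A sketch of the GV idea: compute the intra-utterance variance of a trajectory and penalize deviation from a target GV. The squared-error penalty below is a simplified stand-in for the Gaussian GV likelihood term used in the actual criterion, and the sinusoidal "trajectories" are invented examples:

```python
import numpy as np

def global_variance(c):
    """Intra-utterance (global) variance of a parameter trajectory.

    c: (T, D) speech parameters. Generated trajectories typically have
    a smaller GV than natural ones (the oversmoothing effect).
    """
    return c.var(axis=0)

def gv_penalty(c, gv_target):
    """Squared error between the trajectory GV and a target GV
    (a simplified stand-in for the Gaussian GV likelihood term)."""
    return float(((global_variance(c) - gv_target) ** 2).sum())

t = np.linspace(0, 3, 300)[:, None]
natural = np.sin(2 * np.pi * t)              # wide dynamic range
oversmoothed = 0.3 * np.sin(2 * np.pi * t)   # same shape, smaller GV
target = global_variance(natural)
print(gv_penalty(oversmoothed, target) > gv_penalty(natural, target))  # True
```

In the full criterion this penalty is added to the standard generation objective, so the solver trades off closeness to the HMM means against restoring a natural amount of parameter variation.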
  • 79. Parameter generation considering GV (figure, 0–8 kHz spectra): natural vs. generated (standard) vs. generated (with GV).
  • 80. Evaluations (figure; scores: 2.0, 3.1, 2.0, 3.2, 3.2, 3.1, 4.7, 4.7).
  • 81. Recent work to improve quality
    • Vocoding
      • MELP-style / CELP-style excitation
      • LF model
      • Sinusoidal models
    • Acoustic model
      • Segment models, trajectory models
      • Model combination (product of experts)
      • Minimum generation error training
      • Bayesian modeling
    • Oversmoothing
      • Pre & postfiltering
      • Improvements of GV
      • Hybrid approaches
      • & more…
  • 82. Other challenging topics
    • Keynote speech by Simon King (CSTR) in SSW7 this year
    • Speech synthesis is easy, if ...
      • voice is built offline & carefully checked for errors
      • speech is recorded in clean conditions
      • word transcriptions are correct
      • accurate phonetic labels are available or can be obtained
      • speech is in the required language & speaking style
      • speech is from a suitable speaker
      • a native speaker is available, preferably a linguist
    • Speech synthesis is not easy if we don’t have the right data
  • 83. Other challenging topics
    • Non-professional speakers
      • AVM + adaptation (CSTR)
    • Too little speech data
      • VTLN-based rapid speaker adaptation (Titech, IDIAP)
    • Noisy recordings
      • Spectral subtraction & AVM + adaptation (CSTR)
    • No labels
      • Un- / Semi-supervised voice building (CSTR, NICT, CMU, Toshiba)
    • Insufficient knowledge of the language or accent
      • Letter (grapheme)-based synthesis (CSTR)
      • No prosodic contexts (CSTR, Titech)
    • Wrong language
      • Cross-lingual speaker adaptation (MSRA, EMIME)
      • Speaker & language adaptive training (Toshiba)
  • 84. Outline
    • Fundamentals of HMM-based speech synthesis
      • Mathematical formulation
      • Implementation of individual components
      • Examples demonstrating its flexibility
    • Recent work
      • Vocoding
      • Acoustic modeling
      • Oversmoothing
      • Other challenging research topics
    • Conclusions
  • 85.
    • HMM-based speech synthesis
      • Getting popular in the speech synthesis area
      • Implementation
      • Flexible to change its voice characteristics
      • Its quality is improving
      • There are many challenging research topics
    Summary
  • 86. TTS research is attractive for you!
    • Text analysis
      • Part-of-speech tagging, Grapheme-to-Phoneme conversion, etc.
    • Vocoding
      • Better signal processing, better speech representation
    • Statistical modeling
      • Robust modeling in ASR, knowledge from machine learning
    (diagram) Natural language processing → text analysis; signal processing & speech coding → vocoding; ASR & machine learning → statistical modeling
  • 87. Please join TTS research! Thanks!
