Similar articles
20 similar articles found (search time: 515 ms)
1.
Markov Chain Monte Carlo methods have made it possible to estimate parameters for complex random regression test-day models. Models evolved from single-trait applications with one set of random regressions to multiple-trait applications with several random effects described by regressions. Gibbs sampling has been used for models with linear (with respect to coefficients) regressions and normality assumptions for random effects. Difficulties associated with implementations of Markov Chain Monte Carlo schemes include the lack of good practical methods to assess convergence, slow mixing caused by high posterior correlations of parameters, and the long running time needed to generate enough posterior samples. Those problems are illustrated through a comparison of Gibbs sampling schemes for single-trait random regression test-day models with different model parameterizations, different functions used for regressions and posterior chains of different sizes. Orthogonal polynomials showed better convergence and mixing properties than 'lactation curve' functions with the same number of parameters. Increasing the order of polynomials resulted in a smaller number of independent samples for covariance components. Gibbs sampling under hierarchical model parameterization had a lower level of autocorrelation and required less time for computation. Posterior means and standard deviations of genetic parameters were very similar for chains of different sizes (from 20 000 to 1 000 000) after convergence. Single-trait random regression models with large data sets can be analysed by Markov Chain Monte Carlo methods in a relatively short time. Multiple-trait (lactation) models are computationally more demanding, and better algorithms are required.
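Slow mixing of the kind described above is usually quantified through chain autocorrelations and the effective sample size (ESS). The sketch below is illustrative only: the AR(1) series is a hypothetical stand-in for a poorly mixing Gibbs chain, not one of the test-day models from the study, and all function names are assumptions.

```python
import numpy as np

def autocorr(chain, lag):
    """Lag-k autocorrelation of a 1-D chain of posterior samples."""
    c = chain - chain.mean()
    n = len(c)
    return float(c[: n - lag] @ c[lag:] / (c @ c))

def effective_sample_size(chain, max_lag=500):
    """ESS = n / (1 + 2 * sum of autocorrelations), truncated at the
    first non-positive lag (a simple initial-positive-sequence rule)."""
    s = 0.0
    for k in range(1, max_lag):
        r = autocorr(chain, k)
        if r <= 0.0:
            break
        s += r
    return len(chain) / (1.0 + 2.0 * s)

rng = np.random.default_rng(1)
iid = rng.normal(size=10_000)       # well-mixing chain: ESS close to n
ar = np.empty(10_000)               # AR(1) chain: high autocorrelation
ar[0] = 0.0
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
print(effective_sample_size(iid) > effective_sample_size(ar))  # True
```

For an AR(1) chain with coefficient 0.9, the ESS is roughly n*(1-0.9)/(1+0.9), i.e. about 5% of the nominal chain length, which is the kind of loss incurred when higher-order polynomials increase posterior correlations.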

2.
Bayesian estimation via Gibbs sampling, REML, and Method R were compared for their empirical sampling properties in estimating genetic parameters from data subject to parental selection using an infinitesimal animal model. Models with and without contemporary groups, random or nonrandom parental selection, two levels of heritability, and none or 15% randomly missing pedigree information were considered. Nonrandom parental selection caused similar effects on estimates of variance components from all three methods. When pedigree information was complete, REML and Bayesian estimation were not biased by nonrandom parental selection for models with or without contemporary groups. Method R estimates, however, were strongly biased by nonrandom parental selection when contemporary groups were in the model. The bias was empirically shown to be a consequence of not fully accounting for gametic phase disequilibrium in the subsamples. The joint effects of nonrandom parental selection and missing pedigree information caused estimates from all methods to be highly biased. Missing pedigree information did not cause biased estimates in random mating populations. Method R estimates usually had greater mean square errors than did REML and Bayesian estimates.

3.
Multiple-trait and random regression models have multiplied the number of equations needed for the estimation of variance components. To avoid inversion or decomposition of a large coefficient matrix, we propose estimation of variance components by Monte Carlo expectation maximization restricted maximum likelihood (MC EM REML) for multiple-trait linear mixed models. Implementation is based on full-model sampling for calculating the prediction error variances required for EM REML. Performance of the analytical and the MC EM REML algorithm was compared using a simulated and a field data set. For field data, results from both algorithms corresponded well even with one MC sample within an MC EM REML round. The magnitude of the standard errors of estimated prediction error variances depended on the formula used to calculate them and on the MC sample size within an MC EM REML round. Sampling variation in MC EM REML did not impair the convergence behaviour of the solutions compared with analytical EM REML analysis. A convergence criterion that takes into account the sampling variation was developed to monitor convergence for the MC EM REML algorithm. For the field data set, MC EM REML proved far superior to analytical EM REML both in computing time and in memory need.

4.
Monte Carlo (MC) methods have been found useful in the estimation of variance parameters for large data and complex models with many variance components (VC), with respect to both computer memory and computing time. A disadvantage has been a fluctuation in round-to-round values of estimates that makes the assessment of convergence challenging. Furthermore, with Newton-type algorithms, the approximate Hessian matrix might have sufficient accuracy, but the inaccuracy in the gradient vector exaggerates the round-to-round fluctuation to an intolerable level. In this study, the reuse of the same random numbers within each MC sample was used to remove the MC fluctuation. Simulated data with six VC parameters were analysed by four different MC REML methods: expectation-maximization (EM), Newton–Raphson (NR), average information (AI) and Broyden's method (BM). In addition, field data with 96 VC parameters were analysed by MC EM REML. In all the analyses with reused samples, the MC fluctuations disappeared, but the final estimates by the MC REML methods differed from the analytically calculated values more than expected, especially when the number of MC samples was small. The difference depended on the random numbers generated, and based on repeated MC AI REML analyses, the VC estimates were on average unbiased. The advantage of reusing MC samples is more apparent in the Newton-type algorithms. Smooth convergence opens the possibility of using the fast-converging Newton-type algorithms. However, a disadvantage of reusing MC samples is a possible "bias" in the estimates. To attain acceptable accuracy, a sufficient number of MC samples needs to be generated.
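The effect of reusing random numbers can be seen with any Monte Carlo estimator; the trace estimator below is a hypothetical stand-in for the prediction-error-variance terms in MC REML, and the matrix, function names and sample sizes are all assumptions.

```python
import numpy as np

def mc_trace_inv(A, n_samples, rng):
    """Monte Carlo estimate of tr(A^-1) with standard normal probes z:
    E[z' A^-1 z] = tr(A^-1) (a Hutchinson-style estimator)."""
    est = 0.0
    for _ in range(n_samples):
        z = rng.normal(size=A.shape[0])
        est += z @ np.linalg.solve(A, z)
    return est / n_samples

A = 2.0 * np.eye(50)                 # toy coefficient matrix, tr(A^-1) = 25

# Fresh random numbers every round: the estimate fluctuates from round
# to round, which makes convergence monitoring difficult.
fresh = [mc_trace_inv(A, 20, np.random.default_rng()) for _ in range(5)]

# Reused random numbers (same seed every round): the round-to-round
# fluctuation disappears, at the price of a possible seed-dependent
# "bias" when the number of MC samples is small.
reused = [mc_trace_inv(A, 20, np.random.default_rng(7)) for _ in range(5)]

print(len(set(reused)) == 1)         # True: identical in every round
```

The reused-sample estimate converges smoothly over rounds, which is what makes Newton-type updates on it feasible; its residual error is fixed by the chosen seed and shrinks only as the number of MC samples grows.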

5.
We developed a Bayesian analysis approach using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of the genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

6.
Bayesian analysis via Gibbs sampling, restricted maximum likelihood (REML), and Method R were used to estimate variance components for several models of simulated data. Four simulated data sets that included direct genetic effects and different combinations of maternal, permanent environmental, and dominance effects were used. Parents were selected randomly, on phenotype across or within contemporary groups, or on BLUP of genetic value. Estimates by Bayesian analysis and REML were always empirically unbiased in large data sets. Estimates by Method R were biased only with phenotypic selection across contemporary groups; estimates of the additive variance were biased upward, and all the other estimates were biased downward. No empirical bias was observed for Method R under selection within contemporary groups or in data without contemporary group effects. The bias of Method R estimates in small data sets was evaluated using a simple direct additive model. Method R gave biased estimates in small data sets in all types of selection except BLUP. In populations where the selection is based on BLUP of genetic value or where phenotypic selection is practiced mostly within contemporary groups, estimates by Method R are likely to be unbiased. In this case, Method R is an alternative to single-trait REML and Bayesian analysis for analyses of large data sets when the other methods are too expensive to apply.

7.
Mixed model (co)variance component estimates by REML and Gibbs sampling for two traits were compared for base populations and control lines of Red Flour Beetle (Tribolium castaneum). Two base populations (1296 records in the first replication, 1292 in the second) were sampled from laboratory stock. Control lines were derived from corresponding base populations with random selection and mating for 16 generations. The REML estimate of each (co)variance component for both pupa weight and family size was compared with the mean and 95% central interval of the particular (co)variance estimated by Gibbs sampling with three different weights on the given priors: 'flat', smallest, and 3.7% degree of belief. Results from Gibbs sampling showed that flat priors gave a wider and more skewed marginal posterior distribution than the other two weights on priors for all parameters. In contrast, the 3.7% degree of belief on priors provided reasonably narrow and symmetric marginal posterior distributions. Estimation by REML does not have the flexibility of changing the weight on prior information as does the Bayesian analysis implemented by Gibbs sampling. In general, the 95% central intervals from the three different weights on priors in the base populations were similar to those in control lines. Most REML estimates in base populations differed from REML estimates in control lines. Insufficient information from the data, and confounding of random effects, contributed to the variability of REML estimates in base populations. Evidence is presented showing that some (co)variance components were estimated with less precision than others. Results also support the hypothesis that REML estimates were equivalent to the joint mode of the posterior distribution obtained from a Bayesian analysis with flat priors, but only when there was sufficient information from the data and no confounding among random effects.

8.
Frequentist and Bayesian approaches to scientific inference in animal breeding are discussed. Routine methods in animal breeding (selection index, BLUP, ML, REML) are presented under the hypotheses of both schools of inference, and their properties are examined in both cases. The Bayesian approach is discussed in cases in which prior information is available, prior information is available under certain hypotheses, prior information is vague, and there is no prior information. Bayesian prediction of genetic values and genetic parameters are presented. Finally, the frequentist and Bayesian approaches are compared from a theoretical and a practical point of view. Some problems for which Bayesian methods can be particularly useful are discussed. Both Bayesian and frequentist schools of inference are established, and now neither of them has operational difficulties, with the exception of some complex cases. There is software available to analyze a large variety of problems from either point of view. The choice of one school or the other should be related to whether there are solutions in one school that the other does not offer, to how easily the problems are solved, and to how comfortable scientists feel with the way they convey their results.

9.
Volumes of the official data sets used in the genetic evaluation based on the Japanese Black routine carcass field data have been increasing rapidly. Therefore, an alternative approach with a smaller memory requirement than the current one, which uses restricted maximum likelihood (REML) and empirical best linear unbiased prediction (EBLUP), is desired. This study applied a Bayesian analysis using Gibbs sampling (GS) to a large data set of the routine carcass field data and practically verified its validity in the estimation of breeding values. A Bayesian analysis analogous to REML-EBLUP was implemented, and the posterior means were calculated using every 10th sample from 90 000 samples, after the first 10 000 samples were discarded. Moment and rank correlations between breeding values estimated by GS and by REML-EBLUP were very close to one, and the linear regression coefficients and intercepts of the GS on the REML-EBLUP estimates were essentially one and zero, respectively, showing very good agreement between breeding value estimation by the current GS and by REML-EBLUP. The current GS required only one-sixth of the memory space of REML-EBLUP. It is confirmed that the current GS approach, with its relatively small memory requirement, is valid as a genetic evaluation procedure using large routine carcass data sets.

10.
The Markov chain Monte Carlo (MCMC) strategy provides remarkable flexibility for fitting complex hierarchical models. However, when parameters are highly correlated in their posterior distributions and their number is large, a particular MCMC algorithm may perform poorly and the resulting inferences may be affected. The objective of this study was to compare the efficiency (in terms of the asymptotic variance of features of posterior distributions of chosen parameters, and in terms of computing cost) of six MCMC strategies to sample parameters using simulated data generated with a reaction norm model with unknown covariates as an example. The six strategies are single-site Gibbs updates (SG), single-site Gibbs sampler for updating transformed (a priori independent) additive genetic values (TSG), pairwise Gibbs updates (PG), blocked (all location parameters are updated jointly) Gibbs updates (BG), Langevin-Hastings (LH) proposals, and finally Langevin-Hastings proposals for updating transformed additive genetic values (TLH). The ranking of the methods in terms of asymptotic variance is affected by the degree of the correlation structure of the data and by the true values of the parameters, and no method comes out as an overall winner across all parameters. TSG and BG show very good performance in terms of asymptotic variance especially when the posterior correlation between genetic effects is high. In terms of computing cost, TSG performs best except for dispersion parameters in the low correlation scenario where SG was the best strategy. The two LH proposals could not compete with any of the Gibbs sampling algorithms. In this study it was not possible to find an MCMC strategy that performs optimally across the range of target distributions and across all possible values of parameters. However, when the posterior correlation between parameters is high, TSG, BG and even PG show better mixing than SG.
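The gain from blocking when posterior correlations are high can be reproduced with a toy bivariate normal target; this sketch illustrates only the SG-versus-BG contrast, not the reaction norm model of the study, and all names and values are assumptions.

```python
import numpy as np

rho, n = 0.95, 20_000                 # high posterior correlation
rng = np.random.default_rng(3)

# Single-site Gibbs (SG): alternate draws from the full conditionals
# x1 | x2 ~ N(rho*x2, 1-rho^2) and x2 | x1 ~ N(rho*x1, 1-rho^2).
x = np.zeros((n, 2))
for t in range(1, n):
    x[t, 0] = rng.normal(rho * x[t - 1, 1], np.sqrt(1 - rho ** 2))
    x[t, 1] = rng.normal(rho * x[t, 0], np.sqrt(1 - rho ** 2))

# Blocked update (BG): draw (x1, x2) jointly, giving an i.i.d. chain.
y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

def lag1(c):
    """Lag-1 autocorrelation, the simplest mixing diagnostic."""
    c = c - c.mean()
    return float(c[:-1] @ c[1:] / (c @ c))

print(lag1(x[:, 0]), lag1(y[:, 0]))   # approx rho^2 = 0.90 vs approx 0
```

For single-site Gibbs on a bivariate normal, the lag-1 autocorrelation of each coordinate is rho^2, so mixing degrades sharply as the posterior correlation approaches one, matching the advantage reported for TSG and BG over SG.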

11.
In the analysis of large amounts of data to obtain BLUP, large sets of mixed model equations must be solved iteratively, which can involve considerable computing time. In practice, solutions are required only periodically for breeders to choose the best individuals, so computing time is not usually a limiting factor. In simulation studies involving evaluation of individuals by BLUP, however, many rounds of evaluation are required for each simulated population. Since several or many replicates are usually required to obtain an accurate result from stochastic simulations, computing time can become a limiting factor. One of the factors that can drastically affect computing time in iterative methods is the criterion for ceasing iteration, or convergence criterion (CC). A disadvantage of iterative methods is that the rate of convergence can be slow or, under certain circumstances, the system may not converge at all. Nevertheless, when the system converges, the more stringent the CC, the more accurate the solutions. The more stringent the CC, the more iterations, and hence the more computing time, are required. The objectives of this study were to investigate how much response to selection is affected by the stringency of the CC and how much reranking occurs among selected individuals at different levels of the convergence criterion. These explorations provided a profile analysis of the computing time spent in each of the major subroutines of the BBSIM program.

12.
SUMMARY: Computing properties of 'better' derivative and derivative-free algorithms were compared both theoretically and practically. Assuming that the log-likelihood function is approximately quadratic, in a t-trait analysis the number of steps to achieve convergence increases as t² in 'better' derivative-free algorithms and is independent of t in 'better' derivative algorithms. The cost of one step increases as t³. Consequently, both classes of algorithms have a similar computational cost for single-trait models. In multiple traits, the computing costs increase as t³ and t⁵, respectively. The derivative-free algorithms also have worse numerical properties. Four programs were used to obtain one-, two-, and three-trait REML estimates from field data. Compared to single-trait analyses, the cost of one run for derivative-free algorithms increased by 27-40 times for two traits and 152-686 times for three traits. The corresponding increases for a derivative algorithm were 5- and 21-fold, and 1.8- and 2.2-fold with canonical transformation. Convergence and estimates of derivative algorithms were more predictable and, unlike those of derivative-free algorithms, were much less dependent on the choice of priors. Well-implemented derivative REML algorithms are less expensive and more reliable in multiple traits than derivative-free ones.

13.
This data set consisted of more than 29 245 field records from 24 herds of registered Nelore cattle born between 1980 and 1993, with calves sired by 657 sires and out of 12 151 dams. The records were collected in south-eastern and midwestern Brazil, and animals were raised on pasture in a tropical climate. Three growth traits were included in these analyses: 205- (W205), 365- (W365) and 550-day (W550) weight. The linear model included fixed effects for contemporary groups (herd-year-season-sex) and age of dam at calving. The model also included random effects for direct genetic, maternal genetic and maternal permanent environmental (MPE) contributions to observations. The analyses were conducted using single-trait and multiple-trait animal models. Variance and covariance components were estimated by restricted maximum likelihood (REML) using a derivative-free algorithm (DFREML) for multiple traits (MTDFREML). Bayesian inference was obtained by a multiple-trait Gibbs sampling algorithm (GS) for (co)variance component inference in animal models (MTGSAM). Three different sets of prior distributions for the (co)variance components were used: flat, symmetric, and sharp. The shape parameters (ν) were 0, 5 and 9, respectively. The results suggested that the shape of the prior distributions did not affect the estimates of (co)variance components. From the REML analyses, for all traits, direct heritabilities obtained from single-trait analyses were smaller than those obtained from bivariate analyses and by the GS method. Estimates of genetic correlations between direct and maternal effects obtained using REML were positive but very low, indicating that genetic selection programs should consider both components jointly. GS produced similar but slightly higher estimates of genetic parameters than REML; however, the greater robustness of GS makes it the method of choice for many applications.

14.
This work focuses on the effects of a variable amount of genomic information in the Bayesian estimation of unknown variance components associated with single-step genomic prediction. We propose a quantitative criterion for the amount of genomic information included in the model and use it to study the relative effect of genomic data on the efficiency of sampling from the posterior distribution of parameters of the single-step model when conducting a Bayesian analysis with estimation of unknown variances. The rate of change of estimated variances depended on the amount of genomic information involved in the analysis, but did not depend on the Gibbs updating schemes applied for sampling realizations of the posterior distribution. Simulation revealed a gradual deterioration of convergence rates for the location parameters as new genomic data were gradually added into the analysis. In contrast, the convergence of variance components showed continuous improvement under the same conditions. The sampling efficiency increased proportionally to the amount of genomic information. In addition, the optimal amount of genomic information in the variance–covariance matrix that guarantees the most (computationally) efficient analysis was found to correspond to a proportion of genotyped animals of approximately 0.8. The proposed criterion yields a characterization of the expected performance of the Gibbs sampler if the analysis is subject to adjustment of the amount of genomic data, and can be used to guide researchers on how large a proportion of animals should be genotyped in order to attain an efficient analysis.

15.
A simulation study was conducted to assess the influence of differences in the length of individual testing periods on estimates of (co)variance components of a random regression model for daily feed intake of growing pigs performance tested between 30 and 100 kg live weight. A quadratic polynomial in days on test with fixed regressions for sex, random regressions for additive genetic and permanent environmental effects and a constant residual variance was used for a bivariate simulation of feed intake and daily gain. (Co)variance components were estimated for feed intake only by means of a Bayesian analysis using Gibbs sampling and restricted maximum likelihood (REML). A single trait random regression model analogous to the one used for data simulation was used to analyse two versions of the data: full data sets with 18 weekly means of feed intake per animal and reduced data sets with the individual length of testing periods determined when tested animals reached 100 kg live weight. Only one significant difference between estimates from full and reduced data (REML estimate of genetic covariance between linear and quadratic regression parameters) and two significant differences from expected values (Gibbs estimates of permanent environmental variance of quadratic regression parameters) occurred. These differences are believed to be negligible, as the number lies within the expected range of type I error when testing at the 5% level. The course of test day variances calculated from estimates of additive genetic and permanent environmental covariance matrices also supports the conclusion that no bias in estimates of (co)variance components occurs due to the individual length of testing periods of performance‐tested growing pigs. A lower number of records per tested animal only results in more variation among estimates of (co)variance components from reduced compared with full data sets. 
Compared with the full data, the effective sample size of Gibbs samples from the reduced data decreased to 18% for residual variance and increased up to five times for other (co)variances. The data structure seems to influence the mixing of Gibbs chains.

16.
The genetic evaluation using the carcass field data in Japanese Black cattle has been carried out employing an animal model, implementing restricted maximum likelihood (REML) estimation of additive genetic and residual variances. Because the official data sets are rapidly increasing in volume and therefore require larger memory spaces, an alternative to the REML estimation could be useful. The purpose of this study was to investigate Gibbs sampling conditions for single-trait variance component estimation using the carcass field data. As prior distributions, uniform and normal distributions and independent scaled inverted chi-square distributions were used for macro-environmental effects, breeding values, and the variance components, respectively. Using data sets of different sizes, the influences of Gibbs chain length and thinning interval were investigated, after the burn-in period was determined using the coupling method. As would be expected, the chain lengths had clearly larger effects on the posterior means than the thinning intervals. The posterior means calculated using every 10th sample from 90 000 samples, after 10 000 samples were discarded as the burn-in period, were all considered to be reasonably comparable to the corresponding estimates by REML.
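A minimal single-variance analogue of such a sampler, with a flat prior on the mean, a scaled inverted chi-square prior on the variance, and burn-in plus thinning, might look as follows (the model, prior values and chain settings are illustrative assumptions, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(10.0, 2.0, size=500)      # toy records, true variance 4
n = len(y)
nu0, s0sq = 4.0, 1.0                     # scaled inv-chi^2 prior (assumed)

mu, sig2 = 0.0, 1.0                      # starting values
keep = []
n_iter, burn_in, thin = 11_000, 1_000, 10
for it in range(n_iter):
    # mu | sig2, y ~ N(ybar, sig2/n) under a flat prior on mu
    mu = rng.normal(y.mean(), np.sqrt(sig2 / n))
    # sig2 | mu, y ~ scaled inv-chi^2(nu0 + n, (nu0*s0sq + SS)/(nu0 + n))
    ss = nu0 * s0sq + np.sum((y - mu) ** 2)
    sig2 = ss / rng.chisquare(nu0 + n)
    if it >= burn_in and (it - burn_in) % thin == 0:
        keep.append(sig2)

post_mean = float(np.mean(keep))          # posterior mean of sig2
print(len(keep), round(post_mean, 2))     # 1000 thinned samples, mean near 4
```

Lengthening the chain shrinks the Monte Carlo error of the posterior mean directly, while thinning mainly trades storage against autocorrelation, which is consistent with chain length mattering more than the thinning interval.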

17.
The accessibility of Markov Chain Monte Carlo (MCMC) methods for statistical inference has improved with the advent of general-purpose software. This enables researchers with limited statistical skills to perform Bayesian analysis. Using MCMC sampling to do statistical inference requires convergence of the MCMC chain to its stationary distribution. There is no certain way to prove convergence; it is only possible to ascertain when convergence definitely has not been achieved. These methods are rather subjective and not implemented as automatic safeguards in general MCMC software. This paper considers a pragmatic approach towards assessing the convergence of MCMC methods, illustrated by a Bayesian analysis of the Hui–Walter model for evaluating diagnostic tests in the absence of a gold standard. The Hui–Walter model has two optimal solutions, a property which causes problems with convergence when the solutions are sufficiently close in the parameter space. Using simulated data, we demonstrate tools to assess the convergence and mixing of MCMC chains, with examples both with and without convergence. Suggestions to remedy the situation when the MCMC sampler fails to converge are given. The epidemiological implications of the two solutions of the Hui–Walter model are discussed.
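One widely used pragmatic diagnostic of the sort discussed here is the potential scale reduction factor computed from several parallel chains. The sketch below omits the split-chain refinements of modern implementations, and the two-mode "stuck" chains are only a crude stand-in for a sampler trapped near the two Hui–Walter solutions.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.
    Values near 1 are consistent with convergence; values well above 1
    indicate the chains have not mixed into a common distribution."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    between = n * np.sum((means - means.mean()) ** 2) / (m - 1)
    within = chains.var(axis=1, ddof=1).mean()
    var_hat = (n - 1) / n * within + between / n
    return float(np.sqrt(var_hat / within))

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 2000))                      # chains on one target
stuck = mixed + np.array([[0.0], [5.0], [0.0], [5.0]])  # chains split over two modes
print(round(gelman_rubin(mixed), 2), round(gelman_rubin(stuck), 2))  # near 1 vs well above 1
```

Because R-hat compares between-chain and within-chain variance, it can only flag non-convergence; as the paper notes, a value near 1 never proves that the chains have reached the stationary distribution.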

18.
Simulated horse data were used to compare multivariate estimation of genetic parameters and prediction of breeding values (BV) for categorical, continuous and molecular genetic data using linear animal models via residual maximum likelihood (REML) and best linear unbiased prediction (BLUP) and mixed linear-threshold animal models via Gibbs sampling (GS). Simulation included additive genetic values, residuals and fixed effects for one continuous trait, liabilities of four binary traits, and quantitative trait locus (QTL) effects and genetic markers with different recombination rates and polymorphism information content for one of the liabilities. Analysed data sets differed in the number of animals with trait records and in the availability of genetic marker information. Consideration of genetic marker information in the model resulted in marked overestimation of the heritability of the QTL trait. If information on 10,000 or 5,000 animals was used, bias of heritabilities and additive genetic correlations was mostly smaller, the correlation between true and predicted BV was always higher, and identification of genetically superior and inferior animals was, with regard to the moderately heritable traits, in many cases more reliable with GS than with REML/BLUP. If information on only 1,000 animals was used, neither GS nor REML/BLUP produced genetic parameter estimates with relative bias below 50% for all traits. Selection decisions for binary traits should rather be based on GS than on REML/BLUP breeding values.

19.
Longevity is important in pig production with respect to both economic and ethical aspects. Direct selection for longevity might be ineffective because 'true' longevity can only be recorded once a sow has been culled or has died. Thus, indirect selection for longevity using information from other traits that can be recorded early in life and are genetically correlated with longevity might be an alternative. Leg conformation has been included in many breeding schemes for a number of years. However, it remains to be proven that leg conformation traits are good early indicators of longevity. Our aim was to study genetic associations between leg conformation traits of young (5 months; 100 kg) Swedish Yorkshire pigs in nucleus herds and longevity traits of sows in nucleus and multiplier herds. Data included 97 533 animals with information on conformation (Movement and Overall score) recorded at performance testing and 26 962 sows with information on longevity. The longevity traits were as follows: stayability from 1st to 2nd parity, lifetime number of litters and lifetime number of piglets born alive. Genetic analyses were performed with both linear models using REML and linear-threshold models using Bayesian methods. Heritabilities estimated using the Bayesian method were higher than those estimated using REML, ranging from 0.10 to 0.24 and from 0.07 to 0.20, respectively. All estimated genetic correlations between conformation and longevity traits were significant and favourable. The heritabilities and genetic correlations between conformation and longevity traits indicate that selection on leg conformation should improve sow longevity.

20.
We demonstrated that supernodal techniques were more efficient than traditional methods for factorization and inversion of a coefficient matrix of mixed model equations (MME), which are often required in residual maximum likelihood (REML). Supernodal left-looking and inverse multifrontal algorithms were employed for sparse factorization and inversion, respectively. The approximate minimum degree or multilevel nested dissection was used for ordering. A new computer package, Yet Another MME Solver (yams), was developed and compared with fspak with respect to computing time and size of temporary memory for 13 test matrices. The matrices were produced by fitting animal models to dairy data and by using simulations from sire, sire–maternal grand sire, maternal and dominance models for phenotypic data and an animal model for genomic data. The order of the matrices ranged from 32 840 to 1 048 872. The yams software factorized and inverted the matrices up to 13 and 10 times faster than fspak, respectively, when an appropriate ordering strategy was applied. The yams package required at most 282 MB and 512 MB of temporary memory for factorization and inversion, respectively. Processing time per iteration in average information REML was reduced using yams. The yams package is freely available on request by contacting the corresponding author.
