
CN112860532A - Performance test method, device, equipment, medium and program product - Google Patents


Info

Publication number
CN112860532A
CN112860532A
Authority
CN
China
Prior art keywords
host platform
test
configuration information
environment
performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110229321.9A
Other languages
Chinese (zh)
Other versions
CN112860532B (en)
Inventor
宋国庆
耿萌
杲振刚
丛大杰
陈立珍
田湘玲
丛心怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China
Priority to CN202110229321.9A
Publication of CN112860532A
Application granted
Publication of CN112860532B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/30 - Monitoring
    • G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 - Recording or statistical evaluation of computer activity for performance assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/22 - Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F 11/2289 - Detection or location of defective computer hardware by configuration test
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3664 - Environments for testing or debugging software
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • G06F 11/3668 - Software testing
    • G06F 11/3672 - Test management
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a performance test method. The software configuration information of a host platform in a test environment is adjusted to be consistent with that of the host platform in a production environment. A processor resource optimization mechanism for the host platform and a storage distribution optimization mechanism for hotlist data files are then established according to the hardware configuration information of the host platform in the test environment and in the production environment. A performance test is run on the host platform in the test environment to obtain its performance data, from which the performance data of the host platform in the production environment is estimated. In this way, the performance of the host platform in the production environment can be accurately predicted.

Description

Performance test method, device, equipment, medium and program product
Technical Field
The present application relates to the field of testing technologies, and in particular to a performance test method, an apparatus, a device, a computer-readable storage medium, and a computer program product.
Background
IBM host (mainframe) computers are the main load-bearing platform of a bank's core business system, and their software subsystems are continuously evolved and upgraded. To reduce the production risk of a software subsystem upgrade or routine patch maintenance, system-level and application-level function and performance tests must be performed in a test environment.
The design and deployment of a host-platform performance test environment reference and simulate the conditions of the production environment, but the gap in hardware resource configuration and application architecture remains large: the test environment has a lower resource configuration and a simpler application architecture, so its performance data differ substantially from those of the production environment.
As a result, performance testing on the host platform is of limited value, and a performance test method capable of predicting host-platform performance is urgently needed.
Disclosure of Invention
The application provides a performance testing method. The method includes: adjusting the software configuration information of a host platform in a test environment to be consistent with that of the host platform in a production environment; establishing a processor resource optimization mechanism for the host platform and a storage distribution optimization mechanism for hotlist data files according to the hardware configuration information of the host platform in the test environment and in the production environment; performing a performance test on the host platform in the test environment to obtain its performance data; and estimating from these data the performance data of the host platform in the production environment. In this way, the performance of the host platform in the production environment can be accurately predicted. The application also provides an apparatus, a device, a computer-readable storage medium, and a computer program product corresponding to the method.
In a first aspect, the present application provides a performance testing method, comprising:
acquiring first software configuration information of a host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusting the first software configuration information to enable the first software configuration information to be consistent with the second software configuration information;
acquiring first hardware configuration information of a host platform in a test environment and second hardware configuration information of the host platform in a production environment, and establishing a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of a hotlist data file;
performing a performance test on the host platform in the test environment to obtain first performance data of the host platform in the test environment;
and estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment.
In some possible implementations, the method further includes:
determining a reference factor influencing the performance of the host platform according to the first hardware configuration information and the second hardware configuration information;
estimating the second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment includes:
and estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment and the reference factor.
In some possible implementations, the reference factor includes an input-output time.
In some possible implementations, establishing a processor resource optimization mechanism for a host platform includes:
configuring a processor resource management policy of the host platform, wherein the processor resource management policy indicates that the processor-resource utilization of non-performance-test systems does not exceed a preset threshold.
In some possible implementations, a storage distribution optimization mechanism for building a hotlist data file includes:
adopting, in the test environment, storage devices of the same generation as those in the production environment;
and migrating part of the hotlist data files so that the hotlist data files are uniformly distributed across the storage devices.
In some possible implementations, the method further includes:
acquiring first application architecture information of a host platform in a test environment and second application architecture information of the host platform in a production environment, and deploying the test environment in an isomorphic mode according to the first application architecture information and the second application architecture information.
In some possible implementations, the method further includes:
the application transaction distribution route in the test environment is optimized such that the application transaction distribution route in the test environment is close to the application transaction distribution route in the production environment.
In a second aspect, the present application provides a performance testing device, comprising:
the adjusting unit is used for acquiring first software configuration information of the host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusting the first software configuration information to enable the first software configuration information to be consistent with the second software configuration information;
the optimization unit is used for acquiring first hardware configuration information of the host platform in a test environment and second hardware configuration information of the host platform in a production environment, and establishing a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of the hotlist data files;
the testing unit is used for carrying out performance testing on the host platform in a testing environment to obtain first performance data of the host platform in the testing environment;
and the pre-estimating unit is used for estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment.
In some possible implementations, the apparatus further includes:
the reference unit is used for determining a reference factor influencing the performance of the host platform according to the first hardware configuration information and the second hardware configuration information;
the estimation unit is specifically configured to:
and estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment and the reference factor.
In some possible implementations, the reference factor includes an input-output time.
In some possible implementations, the optimization unit is specifically configured to:
configuring a processor resource management policy of the host platform, wherein the processor resource management policy indicates that the processor-resource utilization of non-performance-test systems does not exceed a preset threshold.
In some possible implementations, the optimization unit is specifically configured to:
adopting, in the test environment, storage devices of the same generation as those in the production environment;
and migrating part of the hotlist data files so that the hotlist data files are uniformly distributed across the storage devices.
In some possible implementations, the apparatus further includes:
the deployment unit is used for acquiring first application architecture information of the host platform in the test environment and second application architecture information of the host platform in the production environment, and performing isomorphic deployment on the test environment according to the first application architecture information and the second application architecture information.
In some possible implementations, the apparatus further includes:
and the optimization subunit is used for optimizing the application transaction distribution route in the test environment so as to enable the application transaction distribution route in the test environment to be close to the application transaction distribution route in the production environment.
In a third aspect, the present application provides an apparatus comprising a processor and a memory. The processor and the memory communicate with each other. The processor is configured to execute instructions stored in the memory to cause the apparatus to perform a performance testing method as in the first aspect or any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which instructions are stored, and the instructions instruct a device to execute the performance testing method according to the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on a device, cause the device to perform the performance testing method of the first aspect or any of the implementations of the first aspect.
The implementations provided by the above aspects may be further combined to provide additional implementations.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides a performance testing method, which comprises the steps of obtaining software configuration information of a host platform in a testing environment and a production environment, adjusting the software configuration information of the testing environment to be consistent with the software configuration information of the production environment, establishing a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of a hotlist data file according to hardware configuration information of the host platform in the testing environment and hardware configuration information of the host platform in the production environment, carrying out performance testing in the testing environment, obtaining performance data in the testing environment, and estimating the performance testing data of the host platform in the production environment. Therefore, the performance testing method capable of accurately predicting the performance of the host platform in the production environment is provided.
Drawings
In order to illustrate the technical methods of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of an application architecture of a production environment and a test environment according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a performance testing method according to an embodiment of the present application;
FIG. 3 is a flow chart of another performance testing method provided by the embodiments of the present application;
FIG. 4 is a flowchart of comparing performance data of a production environment and a test environment according to an embodiment of the present disclosure;
FIG. 5 is a graph illustrating a comparison of response times of test data and production data provided in accordance with an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a second time comparison of test data and production data provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of another performance testing method provided in the embodiments of the present application;
fig. 8 is a schematic structural diagram of a performance testing apparatus according to an embodiment of the present application.
Detailed Description
The scheme in the embodiments provided in the present application will be described below with reference to the drawings in the present application.
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Some technical terms referred to in the embodiments of the present application will be first described.
The host platform is the combination of a host parallel coupling system and host storage. The host parallel coupling system is a computer cluster built from multiple host machines and coupling devices, and has strong parallel processing capability.
For performance testing of the host platform, the test environment is generally kept consistent with the production environment in host software configuration, but its host hardware resource configuration is lower and its application architecture is more simplified.
As shown in FIG. 1, (a) is a schematic diagram of the production-environment application architecture and (b) of the test-environment application architecture. Because the performance data of systems running in the test environment differ greatly from those of the production environment, the performance data obtained in testing can hardly reflect, truly and effectively, how the test product will perform once it goes live in production, which creates hidden risks for production operation and maintenance. The test product may be the host platform itself or software running on it.
In view of the above, the present application provides a performance testing method capable of predicting the performance of a host platform well, where the method can be executed by a testing device. The test device is a device with data processing capability, and may be, for example, a server, or a terminal device such as a desktop computer, a notebook computer, or a smart phone.
Specifically, the test equipment adjusts the software configuration information of the host platform in the test environment to be consistent with that of the host platform in the production environment; establishes a processor resource optimization mechanism for the host platform and a storage distribution optimization mechanism for hotlist data files according to the hardware configuration information of the host platform in the test environment and in the production environment; performs a performance test on the host platform in the test environment to obtain its performance data; and estimates from these data the performance data of the host platform in the production environment. In this way, the performance of the host platform in the production environment can be accurately predicted.
In order to facilitate understanding of the technical solution of the present application, the performance testing method provided by the present application is described below with reference to fig. 2.
S202: the test equipment acquires first software configuration information of the host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusts the first software configuration information to enable the first software configuration information to be consistent with the second software configuration information.
The software configuration information may include software versions, software configuration parameters, and application versions.
The test equipment adjusts the software versions, software configuration parameters, application program versions, and the like in the first software configuration information according to the second software configuration information so that the two are consistent.
Specifically, the software versions may include those of the operating system, database, middleware, active-active components, tool software, and the like.
The software configuration parameters in the second software configuration information are the configuration parameters of the operating system, database, middleware, active-active components, tool software, and other software in the production environment. The test equipment performs a text-level comparison between the software configuration parameters in the first software configuration information and those in the second, and adjusts the former to be consistent with the latter.
An application program of the same version as that in the production environment is deployed and used in the test environment, so that the first software configuration information is consistent with the second in terms of application program version.
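The text-level comparison of step S202 can be sketched as follows. This is an illustrative sketch only; the parameter names and dictionary representation are assumptions, as the patent does not prescribe a data format.

```python
# Hypothetical sketch of step S202: align test-environment software
# configuration parameters with production values and log each change.
def align_config(test_cfg: dict, prod_cfg: dict) -> tuple[dict, list[str]]:
    """Return the adjusted test config and a log of the keys changed."""
    adjusted = dict(test_cfg)
    changes = []
    for key, prod_value in prod_cfg.items():
        if adjusted.get(key) != prod_value:
            changes.append(f"{key}: {adjusted.get(key)!r} -> {prod_value!r}")
            adjusted[key] = prod_value
    return adjusted, changes

# Invented example parameters: a database version and a buffer-pool size.
test_cfg = {"db_version": "v12", "buffer_pool_mb": 4096}
prod_cfg = {"db_version": "v13", "buffer_pool_mb": 8192}
adjusted, changes = align_config(test_cfg, prod_cfg)
```

After the call, `adjusted` matches the production configuration and `changes` records what was modified, which mirrors the comparison-then-adjustment described above.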
S204: the test equipment acquires first hardware configuration information of the host platform in a test environment and second hardware configuration information of the host platform in a production environment, and establishes a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of the hotlist data file.
The hardware configuration information of the host platform includes host hardware information and host storage information.
The host hardware information includes processor capacity and memory capacity.
In terms of processor capacity, the test environment is configured with less capacity that is heavily shared by multiple development and test environments, while the production environment is configured with more capacity and less sharing. In terms of memory capacity, configuring the test environment the same as the production environment is relatively cheap and easy to implement, so the difference in processor capacity is the main concern.
In the test environment, when sufficient processor resources are not guaranteed, running high-concurrency application transactions at processor utilization above 80% triggers longer waits for the processor, which further lengthens the transaction response time, an important performance indicator of the host platform.
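This sensitivity of waiting time to utilization can be illustrated with a standard M/M/1 queueing approximation. The formula is textbook queueing theory, not something given in the patent, and the numbers are illustrative:

```python
def mm1_time_in_system(service_ms: float, utilisation: float) -> float:
    """Mean time in system for an M/M/1 queue: W = S / (1 - rho).
    Waits grow sharply as utilisation approaches 1."""
    if not 0.0 <= utilisation < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return service_ms / (1.0 - utilisation)

w80 = mm1_time_in_system(10.0, 0.80)  # ~50 ms at 80% utilisation
w90 = mm1_time_in_system(10.0, 0.90)  # ~100 ms at 90% utilisation
```

Moving from 80% to 90% utilization roughly doubles the time a transaction spends in the system, which is why contention for processor resources distorts response-time measurements.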
Therefore, the test equipment establishes a processor resource optimization mechanism for the host platform, guaranteeing the processor resources available to the host platform in the test environment and reducing the impact on transaction response time.
The test equipment may establish a processor resource optimization mechanism of the host platform by configuring a processor resource management policy of the host platform, wherein the processor resource management policy indicates that a usage rate of processor resources by the non-test system does not exceed a preset threshold.
Specifically, the test equipment may establish resource management groups for the test system and the non-test systems in the host's hardware resource management platform; for example, the resource group for the test system is denoted resource group A and the resource group for the non-test systems resource group B. During the test, the maximum processor utilization of resource group B is limited to 30%, while the processor usage of resource group A is not limited; that is, at least 70% of the processor resources cannot be contended for by other systems, guaranteeing the processor resources required while the test runs.
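The resource-group policy above can be modeled in a few lines. This is an illustrative sketch only: the real mechanism is configured in the host's hardware resource management platform, not in application code, and the group names and the simple clamping model are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ResourceGroup:
    name: str
    max_cpu_share: float  # cap on processor utilisation, 0.0..1.0

def effective_cpu(groups, demands):
    """Clamp each group's CPU demand to its configured cap."""
    return {g.name: min(demands[g.name], g.max_cpu_share) for g in groups}

# Group A (test system) is uncapped; group B (non-test systems) is
# capped at 30%, so at least 70% of the processor remains for group A.
groups = [ResourceGroup("A", 1.0), ResourceGroup("B", 0.30)]
usage = effective_cpu(groups, {"A": 0.65, "B": 0.50})
```

Even though group B demands 50% of the processor here, it is clamped to its 30% cap, so the test system's measurements are not distorted by contention.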
Regarding storage differences, the test equipment combs through and compares the storage device models, performance parameters, the way the storage devices are used, and the distribution characteristics of high-frequency-access files on the storage devices. High-frequency-access files here means the database system activity log files and the database hotlist data files.
In terms of storage device models and performance parameters, the production environment has a clear advantage. The test equipment therefore treats the production environment's higher data input/output efficiency and shorter I/O time as a reference factor affecting host-platform performance, which facilitates estimating the second performance data of the host platform in the production environment.
In terms of how the storage devices are used, even when the production and test environments use them in the same way, the production environment still has higher storage redundancy and a higher storage-device cache hit rate. The test equipment can treat these characteristics as reference factors affecting host-platform performance, again facilitating estimation of the second performance data. The reference factor includes an input/output time.
In terms of the distribution of high-frequency-access files on the storage devices, the production environment has been optimized continuously over a long period: high-frequency-access files are uniformly distributed over multiple storage devices, avoiding the excessive waiting caused by concentrating them on a single device. In the test environment, deployed high-frequency-access files fall into two cases. The first is static high-frequency-access files, such as database activity logs, whose placement follows the production environment when the test environment is set up. The second is dynamic high-frequency-access files, such as database hotlist data files, which are randomly distributed over the storage devices when the environment is deployed; this easily introduces unpredictable extra time consumption when part of the storage is accessed at high frequency.
Thus, the test equipment establishes a storage distribution optimization mechanism for the host platform's hotlist data files, reducing the unpredictable extra time consumption caused by high-frequency access to part of the storage.
Specifically, the test equipment adopts, in the test environment, storage devices of the same generation as those in the production environment, and then migrates part of the hotlist data files so that they are uniformly distributed across the storage devices, reducing time consumption.
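A minimal sketch of the migration step, under the assumption (not stated in the patent) that files are placed greedily on the least-loaded device, largest first:

```python
import heapq

def distribute(files_mb: dict, n_devices: int) -> dict:
    """Assign each hotlist data file to the currently least-loaded
    storage device (greedy placement, largest files first)."""
    heap = [(0, d) for d in range(n_devices)]  # (accumulated MB, device)
    placement = {}
    for name, size in sorted(files_mb.items(), key=lambda kv: -kv[1]):
        load, dev = heapq.heappop(heap)
        placement[name] = dev
        heapq.heappush(heap, (load + size, dev))
    return placement

# Three hot files spread over two devices; file names are invented.
placement = distribute({"hot_a": 100, "hot_b": 90, "hot_c": 10}, 2)
```

The two largest files land on different devices and the small file fills in the lighter one, so no single device carries all the high-frequency traffic.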
S206: the test equipment performs performance test on the host platform in the test environment to obtain first performance data of the host platform in the test environment.
S208: the test equipment pre-estimates second performance data of the host platform in a production environment according to the first performance data of the host platform in the test environment.
Specifically, the test equipment estimates second performance data of the host platform in the production environment according to first performance data of the host platform in the test environment, a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of the hotlist data file.
In some possible implementation manners, the test device estimates second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment, the reference factor, the processor resource optimization mechanism of the host platform and the storage distribution optimization mechanism of the hotlist data file.
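The patent does not give an explicit estimation formula; the sketch below assumes the simplest use of the I/O-time reference factor, subtracting the I/O-time advantage of the production storage from the measured test response time. The linear model and all numbers are illustrative assumptions.

```python
def estimate_production_response(test_response_ms: float,
                                 test_io_ms: float,
                                 prod_io_ms: float) -> float:
    """Estimate production response time from test measurements by
    removing the extra I/O time incurred only in the test environment."""
    return test_response_ms - (test_io_ms - prod_io_ms)

# Test environment: 120 ms response, of which 40 ms is I/O;
# production storage needs only 15 ms of I/O for the same work.
estimate = estimate_production_response(120.0, 40.0, 15.0)
```

Because processor contention and storage hot spots have already been controlled by the two optimization mechanisms, the remaining test-vs-production gap is dominated by the reference factor, which is what makes a simple adjustment like this plausible.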
In summary, the present disclosure provides a performance testing method. The test equipment adjusts the software configuration information of the test environment according to that of the host platform in the production environment so that the two are consistent, and establishes a processor resource optimization mechanism for the host platform and a storage distribution optimization mechanism for hotlist data files according to the difference between the hardware configuration information of the host platform in the production environment and in the test environment, reducing the hardware gap between the environments. Performance data of the host platform in the production environment can then be estimated from the performance data measured in the test environment, realizing a performance test of the host platform.
The present application further provides another embodiment of the performance testing method provided by the present application, as shown in fig. 3, specifically including the following steps.
S302: the test equipment acquires first software configuration information of the host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusts the first software configuration information to enable the first software configuration information to be consistent with the second software configuration information.
S304: the test equipment acquires first hardware configuration information of the host platform in a test environment and second hardware configuration information of the host platform in a production environment, and establishes a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of the hotlist data file.
S306: the test equipment acquires first application architecture information of the host platform in a test environment and second application architecture information of the host platform in a production environment, and deploys the test environment in an isomorphic mode according to the first application architecture information and the second application architecture information.
The test equipment acquires first application architecture information of the host platform in the test environment and second application architecture information of the host platform in the production environment, each including application architecture configuration information, the application transaction distribution mechanism, the transaction type proportion, the transaction amount, and the like. The application architecture covers the software and hardware of the host platform and the host peripheral equipment. The test equipment then sorts through and compares the first application architecture information and the second application architecture information.
In the aspect of application architecture configuration information, the test equipment deploys the test environment isomorphically according to the software and hardware resource configuration condition of the test environment, and localizes the application operation logic.
In terms of the application transaction distribution mechanism, application transactions in the production environment are routed by foreground terminals, by region, through the load balancing equipment to designated middleware systems of the host for processing, whereas application transactions in the test environment are delivered to the host middleware systems through the load balancing equipment by polling in log-number order. Under the production mechanism, associated transactions reside in the memory of the same host, so the local memory read hit rate is higher and frequent reads of the storage device are avoided, saving time. Under the test mechanism, associated transactions are scattered across the memories of different hosts according to log numbers, generating more storage accesses and therefore consuming more time. The test equipment therefore optimizes the application transaction distribution route in the test environment so that it approximates the application transaction distribution route in the production environment.
Specifically, in the production environment, the application transactions of each province, separated into public (corporate) and private (personal) streams, are routed by foreground terminals through the load balancing equipment to the host middleware system for processing, while in the test environment application transactions are delivered to the host middleware system through the load balancing equipment by log-number polling, which consumes more time.
In this embodiment, the test equipment splits the application transactions in the test environment into public and private streams according to the log number, and then polls and sends the split transactions to the host middleware system for processing, so that the test environment approximates the production environment in terms of the application transaction distribution route, effectively reducing the transaction response time in the test environment.
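The two distribution mechanisms above can be contrasted with a small sketch. This is an illustrative assumption rather than the patented implementation; the field names ("region", "log_no") and host names are invented for the example.

```python
def route_by_polling(txn, hosts):
    """Test-environment style: round-robin by log number, so associated
    transactions scatter across different hosts' memories."""
    return hosts[txn["log_no"] % len(hosts)]

def route_by_affinity(txn, hosts):
    """Production-environment style: route by region, so associated
    transactions share one host's memory and the local hit rate rises."""
    return hosts[hash(txn["region"]) % len(hosts)]

hosts = ["hostA", "hostB", "hostC"]
related = [{"region": "beijing", "log_no": n} for n in range(6)]
print(sorted({route_by_polling(t, hosts) for t in related}))  # scattered over all three hosts
print(len({route_by_affinity(t, hosts) for t in related}))    # 1
```

Under polling, six associated transactions spread over all three hosts; under region affinity they all land on one host, which is why the production mechanism enjoys a higher local memory hit rate.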
In terms of transaction type proportion and transaction amount, transactions are classified by business logic, resource consumption, and the like, and the differences in the quantity and performance of each transaction type between the test environment and the production environment are analyzed. The transaction amount is affected by the associated application channels; the proportion of unsuccessful transactions is low and has little influence on system performance.
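The transaction-type comparison above can be sketched as follows. This is a minimal illustration assuming each transaction record carries a "type" field (an invented name, not from the original):

```python
from collections import Counter

def type_mix(transactions):
    """Return each transaction type's share of total transaction volume."""
    counts = Counter(t["type"] for t in transactions)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def mix_difference(test_txns, prod_txns):
    """Per-type proportion difference between test and production."""
    test, prod = type_mix(test_txns), type_mix(prod_txns)
    return {k: test.get(k, 0.0) - prod.get(k, 0.0) for k in set(test) | set(prod)}

test_txns = [{"type": "query"}] * 3 + [{"type": "transfer"}]
prod_txns = [{"type": "query"}] * 2 + [{"type": "transfer"}] * 2
print(mix_difference(test_txns, prod_txns))  # "query" over-represented by 0.25 in test
```

A positive difference for a type means the test environment over-represents it relative to production, flagging where the workload mix should be rebalanced.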
According to the experimental data shown in Table 1, the combination of the storage distribution optimization mechanism for the hotlist data file and the reform of the application transaction distribution route reduces the transaction response time by 37.78%, further narrowing the gap with the production environment.
TABLE 1 application transaction distribution routing reform and storage distribution optimization experimental data
S308: the test equipment performs performance test on the host platform in the test environment to obtain first performance data of the host platform in the test environment.
S310: the test equipment pre-estimates second performance data of the host platform in a production environment according to the first performance data of the host platform in the test environment.
Specifically, the test equipment estimates the second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment, the processor resource optimization mechanism of the host platform, the storage distribution optimization mechanism of the hotlist data file, and the application transaction distribution route optimization mechanism.
Therefore, the accuracy of the performance test can be further improved by optimizing the application architecture information.
In some possible implementations, the embodiment further includes comparing the performance data in the test environment with the performance data in the production environment in advance, as shown in fig. 4, and specifically includes the following steps.
S402: the test equipment sorts through the operating system data in the performance data under the test environment.
The operating system data in the performance data under the test environment specifically include the processor utilization rate, the memory utilization rate, the coupler processor utilization rate, and the coupler memory utilization rate; these four metrics are used to judge whether the resources of the host parallel coupling system have reached a bottleneck.
S404: and the test equipment judges whether the performance data in the test environment meets the comparison condition.
Specifically, when the performance data are generated in the test environment and the production environment, the application transaction behavior needs to be consistent, that is, the average number of database requests per transaction should be the same. Meanwhile, within the time period selected for the performance data in the test environment, the processor utilization rate must not exceed 80% and the coupler processor utilization rate must not exceed 30%; there must be no shortage of processor memory, coupler memory, or storage resources; and there must be no abnormal system or application events.
And when the performance data does not meet the comparison condition, reselecting the performance data in the test environment.
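A minimal sketch of the comparison-condition check in S404 might look as follows; the 80% and 30% thresholds come from the text, while the field names are illustrative assumptions:

```python
def meets_comparison_conditions(sample):
    """Accept a monitoring sample only if no resource is near its bottleneck
    and no abnormal events occurred (thresholds as stated in the text)."""
    return (sample["cpu_util"] <= 0.80
            and sample["coupler_cpu_util"] <= 0.30
            and not sample["memory_shortage"]
            and not sample["abnormal_events"])

sample = {"cpu_util": 0.65, "coupler_cpu_util": 0.22,
          "memory_shortage": False, "abnormal_events": False}
print(meets_comparison_conditions(sample))  # True
```

A sample failing any one condition is discarded and the performance data is reselected, as described above.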
S406: the test equipment sorts through the middleware system performance data in the performance data under the test environment.
The application transaction response time is the sum of the time consumed by the middleware on the host platform and is therefore of important comparison value. As shown in fig. 5, the ratios of the response times of the test data to those of the production data are compared horizontally.
S408: the test equipment sorts through the database system performance data in the performance data under the test environment.
The time an application transaction spends in the database system is mainly consumed on input and output, which is of important comparison value. As shown in fig. 6, the ratio of the time consumed by the test data to that of the production data is compared horizontally.
S410: the test equipment sorts through the input and output influencing factors.
The hardware factors influencing input and output include the response time of the storage device, the cache hit rate of the storage device, and the like; the application architecture factors influencing input and output include the hit rates of the host memory and the coupler memory, and the like.
S412: the test equipment optimizes the test environment.
The optimization measures specifically include establishing the processor resource optimization mechanism of the host platform, the reference factor, and the storage distribution optimization mechanism of the hotlist data file in S204, the isomorphic deployment of the test environment in S306, and the like.
S414: and comprehensively evaluating the performance data by the testing equipment.
Specifically, based on the optimizations, the test equipment substitutes the relevant numerical differences into the performance data indices of key interest; for example, the production input-output time in the reference factor is substituted into the test data, the response time is recalculated, and a secondary evaluation is performed.
After targeted optimization, the characteristic differences in the performance data are further reduced, giving the data a certain reference value.
For example, the test single-transaction average response time is 28.0 milliseconds, the test single-transaction average I/O response time is 20.28 milliseconds, and the production single-transaction average I/O response time is 8.78 milliseconds. The predicted production single-transaction run time is then (test single-transaction average response time - test single-transaction average I/O response time) + production single-transaction average I/O response time = (28.0 - 20.28) + 8.78 = 16.5 milliseconds. The difference from the actual production single-transaction average response time of 16.9 milliseconds is clearly reduced.
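The secondary estimation in this example reduces to a one-line formula. Here is a sketch using the numbers from the text (the function name is an invented convenience):

```python
def predict_production_run_time(test_avg_resp_ms, test_avg_io_ms, prod_avg_io_ms):
    """Replace the test environment's average I/O time with the production value."""
    return round((test_avg_resp_ms - test_avg_io_ms) + prod_avg_io_ms, 2)

print(predict_production_run_time(28.0, 20.28, 8.78))  # 16.5 (ms), vs. 16.9 ms measured
```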
Therefore, the present solution provides a performance testing method, as shown in fig. 7: on the basis of keeping the software configuration information of the host platform in the test environment consistent with the production environment, the difference characteristics in the hardware configuration and the application architecture are extracted and comprehensively remedied, so that the differences between the test environment and the production environment at the hardware and application architecture levels are reduced, the test environment is improved, and the performance data of the host in the production environment can be estimated more accurately.
Furthermore, the values of the difference characteristic points are substituted into the performance data indices of key interest for a secondary estimation, further improving the accuracy of the estimated performance data for the production environment.
In accordance with the above method embodiment, the present application also provides a performance testing apparatus, referring to fig. 8. The apparatus 800 includes: an adjusting unit 802, an optimization unit 804, a testing unit 806 and an estimation unit 808.
An adjusting unit 802, configured to obtain first software configuration information of the host platform in the test environment and second software configuration information of the host platform in the production environment, and adjust the first software configuration information so that the first software configuration information is consistent with the second software configuration information;
the optimization unit 804 is configured to obtain first hardware configuration information of the host platform in the test environment and second hardware configuration information of the host platform in the production environment, and establish a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of the hotlist data file;
the testing unit 806 is configured to perform a performance test on the host platform in a testing environment, and obtain first performance data of the host platform in the testing environment;
the estimation unit 808 is configured to estimate second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment.
In some possible implementations, the apparatus further includes:
the reference unit is used for determining a reference factor influencing the performance of the host platform according to the first hardware configuration information and the second hardware configuration information;
the estimation unit 808 is specifically configured to:
and estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment and the reference factor.
In some possible implementations, the reference factor includes an input-output time.
In some possible implementations, the optimization unit 804 is specifically configured to:
and configuring a processor resource management strategy of the host platform, wherein the processor resource management strategy indicates that the utilization rate of the non-performance test system to the processor resource does not exceed a preset threshold value.
In some possible implementations, the optimization unit 804 is specifically configured to:
adopting a storage device with the same generation as the production environment in the test environment;
and migrating part of the hot list data files so as to uniformly distribute the hot list database files in the storage device.
In some possible implementations, the apparatus further includes:
the deployment unit is used for acquiring first application architecture information of the host platform in the test environment and second application architecture information of the host platform in the production environment, and performing isomorphic deployment on the test environment according to the first application architecture information and the second application architecture information.
In some possible implementations, the apparatus further includes:
and the optimization subunit is used for optimizing the application transaction distribution route in the test environment so as to enable the application transaction distribution route in the test environment to be close to the application transaction distribution route in the production environment.
The present application further provides a device for implementing the performance testing method described above. The device includes a processor and a memory, which communicate with each other. The processor is configured to execute instructions stored in the memory to cause the device to perform the performance testing method.
The present application provides a computer-readable storage medium having instructions stored therein, which when run on a device, cause the device to perform the performance testing method described above.
The present application provides a computer program product comprising instructions which, when run on an apparatus, cause the apparatus to perform the performance testing method described above.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by special-purpose hardware including special-purpose integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and specific hardware structures for implementing the same functions may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, implementation by a software program is preferable. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a USB disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a training device, or a network device) to execute the method according to the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, training device, or data center to another website site, computer, training device, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that a computer can store or a data storage device, such as a training device, a data center, etc., that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

Claims (10)

1. A method of performance testing, the method comprising:
acquiring first software configuration information of a host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusting the first software configuration information to enable the first software configuration information to be consistent with the second software configuration information;
acquiring first hardware configuration information of the host platform in the test environment and second hardware configuration information of the host platform in the production environment, and establishing a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of a hotlist data file;
performing performance test on the host platform in the test environment to obtain first performance data of the host platform in the test environment;
and according to the first performance data of the host platform in the test environment, second performance data of the host platform in the production environment is estimated.
2. The method of claim 1, further comprising:
determining a reference factor influencing the performance of the host platform according to the first hardware configuration information and the second hardware configuration information;
the estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment comprises:
and estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment and the reference factor.
3. The method of claim 2, wherein the reference factor comprises an input-output time.
4. The method of any of claims 1 to 3, wherein establishing a processor resource optimization mechanism for the host platform comprises:
and configuring a processor resource management strategy of the host platform, wherein the processor resource management strategy indicates that the utilization rate of the processor resource by a non-performance test system does not exceed a preset threshold value.
5. The method according to any one of claims 1 to 3, wherein the mechanism for optimizing the storage distribution of the hot table data file comprises:
adopting a storage device with the same generation as the production environment in the test environment;
and migrating part of the files in the hotlist data file so as to uniformly distribute the hotlist database file in the storage device.
6. The method according to any one of claims 1 to 3, further comprising:
acquiring first application architecture information of the host platform in the test environment and second application architecture information of the host platform in the production environment, and deploying the test environment in an isomorphic manner according to the first application architecture information and the second application architecture information.
7. The method of claim 6, further comprising:
optimizing an application transaction distribution route in the test environment so that the application transaction distribution route in the test environment approximates the application transaction distribution route in the production environment.
8. A performance testing apparatus, the apparatus comprising:
the system comprises an adjusting unit, a test unit and a control unit, wherein the adjusting unit is used for acquiring first software configuration information of a host platform in a test environment and second software configuration information of the host platform in a production environment, and adjusting the first software configuration information so as to enable the first software configuration information to be consistent with the second software configuration information;
the optimization unit is used for acquiring first hardware configuration information of the host platform in the test environment and second hardware configuration information of the host platform in the production environment, and establishing a processor resource optimization mechanism of the host platform and a storage distribution optimization mechanism of a hotlist data file;
the test unit is used for carrying out performance test on the host platform in the test environment to obtain first performance data of the host platform in the test environment;
and the pre-estimating unit is used for estimating second performance data of the host platform in the production environment according to the first performance data of the host platform in the test environment.
9. An apparatus, comprising a processor and a memory, the processor to execute instructions stored in the memory to cause the apparatus to perform the performance testing method of any of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon that direct a device to perform the performance testing method of any of claims 1-7.
CN202110229321.9A 2021-03-02 2021-03-02 Performance test method, device, equipment, medium and program product Active CN112860532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110229321.9A CN112860532B (en) 2021-03-02 2021-03-02 Performance test method, device, equipment, medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110229321.9A CN112860532B (en) 2021-03-02 2021-03-02 Performance test method, device, equipment, medium and program product

Publications (2)

Publication Number Publication Date
CN112860532A true CN112860532A (en) 2021-05-28
CN112860532B CN112860532B (en) 2023-06-30

Family

ID=75990935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110229321.9A Active CN112860532B (en) 2021-03-02 2021-03-02 Performance test method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN112860532B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709448A (en) * 2021-08-05 2021-11-26 贵阳朗玛视讯科技有限公司 IPTV system-based testing device and method
CN116990660A (en) * 2023-06-25 2023-11-03 珠海妙存科技有限公司 eMMC aging test method, eMMC aging test device, electronic equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7099797B1 (en) * 2003-07-08 2006-08-29 Avanza Technologies, Inc. System and method of testing software and hardware in a reconfigurable instrumented network
CN103246606A (en) * 2013-04-26 2013-08-14 广东电网公司电力科学研究院 Method and system for testing performances of ESB (enterprises service bus) platform
CN110674009A (en) * 2019-09-10 2020-01-10 平安普惠企业管理有限公司 Application server performance monitoring method and device, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN112860532B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US10819603B2 (en) Performance evaluation method, apparatus for performance evaluation, and non-transitory computer-readable storage medium for storing program
CN112650576B (en) Resource scheduling method, device, equipment, storage medium and computer program product
CN110096336B (en) Data monitoring method, device, equipment and medium
US20190245756A1 (en) Performance adjustment method, apparatus for peformance adjustment, and non-transitory computer-readable storage medium for storing program
US20110231860A1 (en) Load distribution system
CN111324303B (en) SSD garbage recycling method, SSD garbage recycling device, computer equipment and storage medium
CN112162891B (en) Performance test method in server cluster and related equipment
US9396087B2 (en) Method and apparatus for collecting performance data, and system for managing performance data
CN112860532B (en) Performance test method, device, equipment, medium and program product
CN111338779B (en) Resource allocation method, device, computer equipment and storage medium
CN111562889A (en) Data processing method, device, system and storage medium
CN101957778B (en) Software continuous integration method, device and system
CN110297743B (en) Load testing method and device and storage medium
CN111562884B (en) Data storage method and device and electronic equipment
CN110716875A (en) Concurrency test method based on feedback mechanism in domestic office environment
CN111858656A (en) Static data query method and device based on distributed architecture
CN112133357A (en) eMMC testing method and device
US20200356454A1 (en) Method and computer storage node of shared storage system for abnormal behavior detection/analysis
CN110347546A (en) Monitor task dynamic adjusting method, device, medium and electronic equipment
CN115480867A (en) Method and device for estimating time consumed by thermal migration and computer equipment
CN115079958A (en) Multi-node load balancing cold and hot data migration device, method, terminal and medium
US20170115895A1 (en) Method and apparatus for big size file blocking for distributed processing
CN114490405A (en) Resource demand determination method, device, equipment and storage medium
US20140359104A1 (en) Grouping processing method and system
CN113051143A (en) Detection method, device, equipment and storage medium for service load balancing server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant