
CN118245385B - Test method, test platform, equipment, medium and product - Google Patents

Test method, test platform, equipment, medium and product

Info

Publication number
CN118245385B
CN118245385B (Application CN202410667427.0A)
Authority
CN
China
Prior art keywords
test
target
item
knowledge data
failure item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410667427.0A
Other languages
Chinese (zh)
Other versions
CN118245385A (en)
Inventor
窦志冲
张百林
徐国振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202410667427.0A priority Critical patent/CN118245385B/en
Publication of CN118245385A publication Critical patent/CN118245385A/en
Application granted granted Critical
Publication of CN118245385B publication Critical patent/CN118245385B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3688: Test management for test execution, e.g. scheduling of test suites
    • G06F11/368: Test management for test version control, e.g. updating test cases to a new software version
    • G06F11/3692: Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments of the present application disclose a test method, a test platform, a device, a medium, and a product, relating to the field of computer technology, which can reduce the time demanded of related personnel and shorten the test cycle of a software product. The method includes the following steps: in response to receiving a test start instruction, executing a first round of testing to obtain a first test result; repairing a target failure item in the first test result, where the target failure item includes a test failure item caused by at least one of a drive configuration anomaly and a test environment configuration anomaly; after the target failure item is repaired, executing a second round of testing on the target failure item to obtain a second test result; and generating a test report from the first test result and the second test result, the test report being used by a user for anomaly repair and regression testing.

Description

Test method, test platform, equipment, medium and product
Technical Field
The present application relates to the field of computer technologies, and in particular, to a testing method, a testing platform, a device, a medium, and a product.
Background
At present, the testing of software products such as operating systems is mainly carried out by a test platform. The test platform automatically executes one round of testing over all test items and aggregates the results of all test items into a test report. The user then repairs the reported anomalies according to the test report and performs a regression test after the repairs are complete.
However, the test results obtained when the test platform executes a test are often affected by multiple factors: besides test failure items caused by anomalies in the software product itself, there are test failure items caused by drive configuration anomalies, test environment configuration anomalies, and the like. The test platform reports all of these failure items together in one test report for the user to repair and regression-test, which occupies a great deal of the user's time and prolongs the test cycle of the software product.
Disclosure of Invention
The embodiments of the present application aim to provide a test method, a test platform, a device, a medium, and a product that can reduce the time demanded of the user and shorten the test cycle of a software product.
To solve the above technical problem, in a first aspect, an embodiment of the present application provides a test method applied to a test platform, the method including:
in response to receiving a test start instruction, executing a first round of testing to obtain a first test result;
repairing a target failure item in the first test result, where the target failure item includes a test failure item caused by at least one of a drive configuration anomaly and a test environment configuration anomaly;
after the target failure item is repaired, executing a second round of testing on the target failure item to obtain a second test result; and
generating a test report from the first test result and the second test result, the test report being used by a user for anomaly repair and regression testing.
In a second aspect, an embodiment of the present application further provides a test platform, where the test platform includes a controller, a tester, and a target module, where:
the controller is configured to start a first round of testing in response to receiving a test start instruction and to send a first instruction to the tester;
the tester is configured to execute the first round of testing in response to the first instruction, obtain a first test result, and send the first test result to the controller;
the controller is further configured to send a target failure item in the first test result to the target module, where the target failure item includes a test failure item caused by at least one of a drive configuration anomaly and a test environment configuration anomaly;
the target module is configured to repair the target failure item and to send a second instruction to the controller after the target failure item is repaired;
the controller is further configured to start a second round of testing and send a third instruction to the tester in response to receiving the second instruction;
the tester is further configured to execute the second round of testing on the target failure item in response to the third instruction, obtain a second test result, and send the second test result to the controller;
the controller is further configured to generate a test report from the first test result and the second test result, the test report being used by a user for anomaly repair and regression testing.
In a third aspect, an embodiment of the present application further provides an electronic device including a memory, a processor, and a computer program stored in the memory, where the processor executes the computer program to implement the test method according to the first aspect.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium having stored thereon a computer program/instructions which, when executed by a processor, implement the test method according to the first aspect.
In a fifth aspect, embodiments of the present application further provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the test method according to the first aspect.
According to the above technical solution, after the first round of testing is completed, the test platform repairs the target failure item, thereby removing the influence of factors such as drive configuration anomalies and test environment configuration anomalies on the test results. After the repair is finished, a second round of testing is executed on the target failure item, and a test report is generated by combining the results of the two rounds. This avoids the large amount of time that would otherwise be spent on anomaly repair and regression testing of test failure items caused by drive or test environment configuration anomalies, and thus shortens the test cycle of the software product.
Drawings
For a clearer description of the embodiments of the present application, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic structural diagram of a test platform according to the related art;
FIG. 2 is a flow chart of a test method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an empirical knowledge classification according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a test platform according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an implementation process of a test method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a testing device according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present application.
The terms "comprising" and "having" and any variations thereof in the description and claims of the application and in the foregoing drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed.
In order to facilitate understanding of the technical scheme provided by the application, technical terms related to the embodiment of the application are explained below.
Component compatibility: refers to compatibility between operating systems and server components.
Complete machine compatibility: refers to the compatibility between an operating system and a server of a certain model.
Components: server components such as the central processing unit (CPU), memory, network card, and disk; the term refers to the various components that make up a server hardware system, typically including a Redundant Array of Independent Disks (RAID) card, a network card, a hard disk, a Serial Attached SCSI (SAS) card, and the like.
Program error (Bug): a Bug is an error, defect, or problem in a computer program that can cause the program to fail or produce incorrect results.
Linux Test Project (LTP): an open-source project that aims to provide a comprehensive set of testing tools and test cases for Linux systems; the purpose of the LTP is to verify the reliability, robustness, and stability of the Linux kernel and its related functions.
RAID card: a hardware device used to connect and manage disk drives so as to implement different RAID levels; the main purpose of a RAID card is to improve data reliability, fault tolerance, and system performance.
SAS card: a hardware device used to connect and manage Small Computer System Interface (SCSI) hard disk drives attached through a serial connection; SAS cards provide a high-speed data transfer interface, support hot plugging, and are compatible with Serial Advanced Technology Attachment (SATA) hard disks.
Drive (Driver): commonly called a driver, a special set of software that provides the computer operating system with the instructions needed to communicate with a particular hardware device; its main purpose is to let the computer's software system interact with the hardware device so that the hardware can work properly under the operating system's instructions.
Network card: a network interface card (NIC) is a piece of computer hardware used to connect a computer to a network; the network card enables the computer to send data into the network and to receive data from other devices on the network.
Testing is the last line of defense for product quality; for a software product, the most important means of ensuring product quality is ensuring the quality of the tests performed on it.
At present, the testing of software products such as operating systems mainly relies on a test platform, so as to guarantee the test quality and test efficiency of the software product. The test platform automatically executes one round of testing over all test items and aggregates the results into a test report. The related personnel (i.e., the user) then comb through the test report to locate and repair the anomalies, and perform a regression test after the repairs are complete.
Illustratively, referring to the schematic structural diagram of the related-art test platform shown in FIG. 1, the test platform mainly consists of a controller and a tester; the tester includes a plurality of physical machines, and multiple virtual machines (VMs) may be created on a single physical machine. The controller is responsible for issuing test commands that make the tester execute the relevant tests, and it also hosts a database and a visualization platform for storing and displaying test results. As can be seen from FIG. 1, the current test platform is mainly responsible for the flow of automated testing itself; that is, it only has functions for executing tests and for summarizing and displaying test results.
However, the test results obtained by the test platform are often affected by multiple factors. Besides anomalies in the software product itself (i.e., genuine Bugs), factors such as drive configuration anomalies (e.g., a missing driver for a specific component) and test environment configuration anomalies also produce test failure items. These can in fact be regarded as false-report Bugs unrelated to the software product itself, but the current test platform lacks the ability to handle them, so the false-report Bugs are submitted to the related personnel along with the test report for anomaly repair and regression testing. This wastes the related personnel's time and prolongs the test cycle of the software product.
Taking an operating system as an example, in practice it is usually necessary to test and release different versions of the operating system frequently: versions updated on a fixed cycle, innovation versions with newly developed functions, maintenance versions of already-released versions (i.e., versions in use by customers), and so on. Moreover, unlike most software product tests (which only need to cover interfaces, functions, and performance), operating system test content is more complex and the test forms are more diverse. Therefore, operating system testing is currently usually carried out by a test platform, so as to guarantee test quality and efficiency and, in turn, the release efficiency and quality of each operating system version.
Specifically, after the image of the operating system version to be tested has been built, the test platform can be triggered to execute a round of automated testing. After all test items have been tested, the test platform aggregates the obtained results into a test report and sends it to the person responsible for testing, informing them that the test is complete. The test lead then combs through the test result of each test item in the report, determines the anomalies (i.e., Bugs) present in the tested operating system version, and creates corresponding Bugs in the defect management platform, which are handed to the relevant developers for localization and repair according to the project management plan. After the repairs are complete, the relevant testers perform regression testing on these Bugs. Once regression testing is finished, the related personnel compile the final test report and release the tested operating system version.
However, because of the complexity and diversity of operating system tests, the test platform often produces many test failure items (i.e., false-report Bugs) caused by test environment or drive configuration anomalies. If these false-report Bugs are put into the test report and the testers and developers perform anomaly repair and regression testing on them, the test quality is affected, a great deal of the developers' and testers' time is wasted, the test cycle of the operating system version is prolonged, and the release efficiency and quality of the operating system version suffer.
To address these problems in the related art, the present application provides a test scheme: for test failure items caused by test environment configuration anomalies and drive configuration anomalies, repair and retest stages are added to the test platform, so that developers and testers can avoid meaningless anomaly repair and regression testing of false-report Bugs. This reduces the time wasted by developers and testers, shortens the test cycle of software products, and safeguards test quality.
The following describes in detail a testing method, a testing platform, a device, a medium and a product provided by the embodiments of the present application through some embodiments and application scenarios thereof with reference to the accompanying drawings.
In a first aspect, referring to FIG. 2, a flowchart of a test method according to an embodiment of the present application is shown, where the method may include the following steps:
Step S101: in response to receiving a test start instruction, execute a first round of testing to obtain a first test result.
In a specific implementation, after preparing the software product to be tested (for example, preparing the image of the operating system version to be released), the user sends a test start instruction to the test platform, which triggers the test platform to automatically execute a first round of testing on the software product. In this round, the test platform tests the software product according to all the preset test items, for example performing a full functional test, and takes every test failure item and every test success item obtained in this round as the first test result.
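As a minimal sketch of step S101, the first round can be modeled as a loop that runs every preset test item once and records pass/fail per item. The executor callback and the test item names below are illustrative placeholders, not the platform's real API.

```python
# Sketch of the first test round (step S101). The executor callback and the
# test item names are illustrative placeholders, not the platform's real API.

def run_first_round(test_items, execute):
    """Run every preset test item once; map item -> True (pass) / False (fail)."""
    return {item: execute(item) for item in test_items}

# Usage with a stubbed executor in which "net-01" and "disk-01" fail.
items = ["cpu-01", "net-01", "disk-01", "mem-01"]
result = run_first_round(items, lambda i: i not in {"net-01", "disk-01"})
failures = [i for i, ok in result.items() if not ok]  # -> ["net-01", "disk-01"]
```

Both the failure items and the success items are kept, since the rejection logic of step S104 later needs to compare the two rounds.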
Step S102: repair the target failure item in the first test result.
The target failure item includes a test failure item caused by at least one of a drive configuration anomaly and a test environment configuration anomaly.
In a specific implementation, the test platform may directly treat all test failure items in the first test result as target failure items, so as to repair the drive configuration anomalies or test environment configuration anomalies of all the test failure items.
Alternatively, the test platform may screen out, from the first test result, the subset of test items that are more likely to be affected by a drive configuration anomaly or a test environment configuration anomaly, by means such as log analysis or manual intervention, and treat them as target failure items, so as to repair them specifically with respect to drive or test environment configuration anomalies. For example, the test platform may treat test items whose associated logs contain specific information (e.g., information about a specific type of component or a specific type of test) as target failure items.
After determining the target failure items, the test platform can automatically repair them according to the repair means and required data (such as drivers or environment parameters) preset by the user for drive configuration anomalies and test environment configuration anomalies.
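The log-based screening described above can be sketched as simple keyword matching over each failure item's log. The keyword lists and log lines below are invented examples, not the patent's actual screening criteria.

```python
# Illustrative screening of target failure items (step S102): a failure item
# whose log mentions drive- or environment-related keywords becomes a
# candidate for automatic repair. Keyword lists and logs are invented.

DRIVE_KEYWORDS = ("driver not found", "no such device", "firmware")
ENV_KEYWORDS = ("missing package", "env variable", "mount failed")

def screen_target_failures(failure_logs):
    """failure_logs: dict mapping failed item name -> its log text."""
    targets = {}
    for item, log in failure_logs.items():
        text = log.lower()
        drive = any(k in text for k in DRIVE_KEYWORDS)
        env = any(k in text for k in ENV_KEYWORDS)
        if drive or env:
            targets[item] = {"drive": drive, "env": env}
    return targets

logs = {
    "net-01": "eth0: driver not found for model A",
    "ltp-03": "setup: missing package ltp-deps",
    "app-07": "segmentation fault in product binary",  # a genuine product Bug
}
targets = screen_target_failures(logs)
```

Note that "app-07" is not selected: a failure that looks like a product defect stays in the report for human analysis rather than being auto-repaired.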
Step S103: after the target failure item is repaired, execute a second round of testing on the target failure item to obtain a second test result.
In a specific implementation, the test platform automatically performs the second round of testing once the target failure item has been repaired. In this round, the test platform retests the target failure item to obtain the second test result.
Step S104: generate a test report from the first test result and the second test result, the test report being used by a user for anomaly repair and regression testing.
In a specific implementation, the test platform may directly summarize the first test result and the second test result into a test report, so that a user (such as a tester or developer) can compare the two results and avoid processing test failure items caused by drive configuration anomalies or test environment configuration anomalies.
Alternatively, the test platform may remove from the first test result the test failure items that correspond to test success items in the second test result, i.e., the failure items caused purely by drive or test environment configuration anomalies. For example, if test item A fails in the first round but succeeds in the second round, its failure item is removed from the first test result.
After this removal, the remaining test failure items in the first test result and the test failure items in the second test result are summarized into the test report, which reduces the time the user spends analyzing the report and improves test efficiency.
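The rejection-and-merge logic of step S104 can be sketched as follows. The data shape (dicts of item name to pass/fail) is an assumption for illustration, not the platform's real report format.

```python
# Sketch of report generation (step S104): a failure item from round one that
# passes in round two is treated as a false-report Bug and dropped from the
# report. The dict-of-bools data shape is an assumption for illustration.

def build_report(first_result, second_result):
    """Both arguments map test item name -> True (pass) / False (fail)."""
    false_reports = {i for i, ok in second_result.items()
                     if ok and first_result.get(i) is False}
    # Round-one failures that were never retested stay in the report...
    kept = {i for i, ok in first_result.items()
            if not ok and i not in second_result}
    # ...as do items that still fail after repair and retest.
    kept |= {i for i, ok in second_result.items() if not ok}
    return {"false_reports": sorted(false_reports), "failures": sorted(kept)}

first = {"cpu-01": True, "net-01": False, "disk-01": False, "app-07": False}
second = {"net-01": True, "disk-01": False}   # only target items were retested
report = build_report(first, second)
```

In this example "net-01" is rejected as a false report, while "disk-01" (still failing after repair) and "app-07" (never a target failure item) remain for the user to handle.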
According to the above technical solution, after the first round of testing is completed, the test platform repairs the target failure item, thereby removing the influence of factors such as drive configuration anomalies and test environment configuration anomalies on the test results. After the repair is finished, a second round of testing is executed on the target failure item, and a test report is generated by combining the results of the two rounds. This avoids the large amount of time that would otherwise be spent on anomaly repair and regression testing of test failure items caused by drive or test environment configuration anomalies, and thus shortens the test cycle of the software product.
Optionally, after generating the test report from the first test result and the second test result, the method further includes:
sending the test report to the user through a preset transmission channel (such as e-mail), and/or presenting the test report to the user through a preset visualization (such as charts).
Optionally, in one embodiment, the test platform may repair the target failure item according to the following steps:
Step S201: when the target failure item includes a first test failure item caused by a drive configuration anomaly, obtain target feature information related to the hardware device from the log of the first test failure item.
In a specific implementation, the test platform may determine whether each test failure item is a first test failure item caused by a drive configuration anomaly by analyzing the log of each target failure item, for example by detecting whether the log contains information about a specific type of hardware device (such as a server component).
When a first test failure item is found, the test platform obtains from its log information such as the model and state (e.g., connection state) of the hardware device as the target feature information.
Step S202: when the target failure item includes a second test failure item caused by a test environment configuration anomaly, obtain target feature information related to the test item from the log of the second test failure item.
In a specific implementation, the test platform may determine whether each test failure item is a second test failure item caused by a test environment configuration anomaly by analyzing the log of each target failure item, for example by detecting whether the log contains information about a specific type of test item (e.g., an LTP-related, compatibility-related, or basic-function-related test item). It should be understood that, in practice, the first test failure item and the second test failure item may be the same test failure item among the target failure items.
When a second test failure item is found, the test platform obtains from its log information such as the test item identifier (e.g., its number), feature word (descriptive information about the test item), and state (e.g., test success or failure) as the target feature information.
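The feature extraction of steps S201 and S202 can be sketched with regular expressions. The log formats, field names, and patterns below are invented for illustration; a real platform would parse its own log layout.

```python
# Illustrative extraction of target feature information (steps S201-S202)
# using regular expressions; the log line formats are invented examples.
import re

def hw_features(log):
    """Pull device type, model and link state from a drive-failure log."""
    m = re.search(r"device=(\w+) model=(\w+) state=(\w+)", log)
    return {"device": m.group(1), "model": m.group(2),
            "state": m.group(3)} if m else None

def item_features(log):
    """Pull test-item id, feature word and status from an env-failure log."""
    m = re.search(r"item=(\S+) desc=(\w+) status=(\w+)", log)
    return {"id": m.group(1), "desc": m.group(2),
            "status": m.group(3)} if m else None

hw = hw_features("probe: device=nic model=A state=down")
it = item_features("ltp: item=ltp-1 desc=syscall status=fail")
```

The extracted dictionaries are exactly the keys used to look up a repair means in the failure-item processing list of step S203.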
Step S203: acquire a target repair means from a pre-established failure-item processing list according to the acquired target feature information.
In a specific implementation, the user can, based on experience, organize and collect the repair means for drive anomalies of different hardware devices (e.g., the name and configuration method of the driver to be reconfigured) and the repair means for test environment configuration anomalies of different test items (e.g., the name and configuration method of the environment parameter to be reconfigured), so as to establish the failure-item processing list.
When repairing each target failure item, the test platform can look up in the failure-item processing list the repair means of the corresponding hardware device or test item according to the target feature information (which describes the characteristics of the hardware device or test item), thereby obtaining the target repair means.
Step S204: acquire, from a pre-established database, the target data required to execute the target repair means.
In a specific implementation, the user may build a database in advance to store the data (e.g., drivers or environment parameters) needed to execute each of the different repair means. After obtaining the target repair means, the test platform can quickly obtain from the database the target data, such as the driver or environment parameters, required to execute it.
Step S205: repair the target failure item using the target data.
In this embodiment, by introducing a log analysis stage and establishing a failure-item processing list and a database, the present application provides a concrete scheme for automatic repair by the test platform, which preserves the degree of test automation and improves test efficiency.
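Steps S203 to S205 can be sketched as a table lookup followed by a data fetch. The processing list, database contents, and repair-action names below are hypothetical; the real platform would apply the fetched data (load a driver, set an environment parameter) rather than just report it.

```python
# Sketch of the lookup-and-repair flow (steps S203-S205): target feature
# information keys into a failure-item processing list to find a repair
# means, whose required data comes from a separate database. All table
# contents and action names are hypothetical.

PROCESSING_LIST = {
    # (device/item type, model/id) -> name of the repair means
    ("nic", "A"): "reinstall_driver_A",
    ("ltp", "ltp-1"): "set_env_param_a",
}
REPAIR_DATABASE = {
    # repair means -> data needed to execute it (driver file, env values, ...)
    "reinstall_driver_A": {"driver": "driverA.ko"},
    "set_env_param_a": {"param": "a", "value": "1"},
}

def repair(feature_key):
    action = PROCESSING_LIST.get(feature_key)
    if action is None:
        return None            # no known repair means: leave for manual handling
    data = REPAIR_DATABASE[action]
    # A real platform would now apply `data` (load the driver, export the
    # environment variable, ...); here we only report what would be done.
    return {"action": action, "data": data}

done = repair(("nic", "A"))
skipped = repair(("sas", "B"))   # not in the list -> not auto-repaired
```

The `None` branch corresponds to the failure items recorded as "not repaired" below, which are excluded from the second test round.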
Optionally, during the repair of the target failure items, the test platform may record which test failure items have been repaired, and record which have not been repaired because no associated repair means was found in the failure-item processing list. The test platform then executes the second round of testing only on the repaired test failure items, avoiding retests of unrepaired items and thereby preserving test efficiency.
Optionally, in one embodiment, the failure term handling list is established by:
Step S301: respective first knowledge data related to drive configuration anomalies is collected.
Wherein the first knowledge data includes: each piece of first characteristic information related to the hardware equipment and each piece of first repairing means when the hardware equipment has abnormal driving configuration.
In specific implementation, a user can collect experience knowledge related to driving exception repair and sort each collected experience knowledge into first knowledge data corresponding to different hardware devices; or the user may input the collected experience knowledge to the test platform, and the test platform automatically collates the experience knowledge into each first knowledge data and stores the first knowledge data in a pre-configured database (which may be called a driver library). For example, for a network card of model a, the first knowledge data corresponding to the network card may include: first characteristic information such as a network card, a model A, a normal connection state and the like, and first restoration means such as a drive name A, a drive name B and the like.
Step S302: and collecting second knowledge data related to the configuration abnormality of the test environment.
Wherein the second knowledge data includes: and each second characteristic information related to the test item and each second repairing means when the test item has abnormal configuration of the test environment.
In a specific implementation, a user may collect experience knowledge related to test-environment-anomaly repair and sort it into second knowledge data corresponding to different test items; alternatively, the user may input the collected experience knowledge into the test platform, which automatically collates it into second knowledge data and stores it in a pre-configured database (which may be called a knowledge base). For test item 1 in LTP, the corresponding second knowledge data may include: second characteristic information such as "LTP", "number 1", and "test failure", and second repair means such as "environment parameter name a".
Step S303: and establishing the failure item processing list by carrying out cluster analysis on the first knowledge data and the second knowledge data.
In a specific implementation, the test platform may perform cluster analysis on the first knowledge data and the second knowledge data through a K-means clustering algorithm, a K-medians clustering algorithm, or the like, determine the association relationship between characteristic information (i.e., the first and second characteristic information) and repair means (i.e., the first and second repair means), and establish the failure item processing list according to that association relationship. In this way, the anomaly repair range supported by the test platform is not limited to the anomaly cases literally described in the empirical data, which ensures the anomaly repair capability of the test platform.
In this embodiment, it is considered that technicians accumulate much experience knowledge during daily development and testing, and this knowledge can provide key means for resolving false-alarm Bugs. However, in some more complex tests (such as operating system tests) it is difficult to judge in advance which test items may fail, and different false-alarm Bugs require different repair means, so the test platform cannot directly use this experience knowledge for automatic repair. Therefore, the application divides the experience knowledge into two parts, hardware devices and test items, and combs each part at finer granularity into the two information types of characteristic information and repair means; reference is made to the schematic diagram of experience-knowledge classification shown in fig. 3. Experience knowledge combed in this way (i.e., the first knowledge data and the second knowledge data) can be used effectively by the test platform; for example, the test platform can use the characteristic information as an index to query each repair means, providing an implementation basis for automatically repairing driving configuration anomalies and test environment configuration anomalies.
Optionally, establishing the failure item processing list by performing cluster analysis on the first knowledge data and the second knowledge data includes:
step S3031: and according to the similarity degree between the first knowledge data, determining the coordinates of sample points mapped by the first characteristic information and the first restoration means in the first knowledge data in a preset multidimensional space.
In a specific implementation, the dimension of the preset multidimensional space may be determined according to the number of information types involved in the first knowledge data and the second knowledge data. To ensure the reliability of subsequent grouping, the coordinates are set so that, for any two first knowledge data, the higher their degree of similarity, the smaller the distance in the preset multidimensional space between the sample points mapped by their information (i.e., the first characteristic information and the first repair means); the lower their degree of similarity, the larger that distance. This ensures that sample points mapped from highly similar first knowledge data are more likely to be divided into the same group, while sample points mapped from dissimilar first knowledge data are more likely to be divided into different groups.
Step S3032: and according to the similarity degree between the second knowledge data, determining coordinates of sample points mapped by the second characteristic information and the second restoration means in the preset multidimensional space.
In a specific implementation, to ensure the reliability of subsequent grouping, the sample points associated with the second knowledge data may be separated from the sample points associated with the first knowledge data by a larger distance when setting coordinates, reducing the chance that sample points associated with both the first and the second knowledge data fall into the same group.
When the coordinates of the sample points are set, for any two second knowledge data, the higher their degree of similarity, the smaller the distance in the preset multidimensional space between the sample points mapped by their information (i.e., the second characteristic information and the second repair means); the lower their degree of similarity, the larger that distance. This ensures that sample points mapped from highly similar second knowledge data are more likely to be divided into the same group, while sample points mapped from dissimilar second knowledge data are more likely to be divided into different groups.
For example, consider the three-dimensional space (i.e., the preset multidimensional space) whose axes correspond to hardware device model, test item number, and repair means. Model A1 and repair means C1 belong to the same first knowledge data (which can be regarded as extremely high similarity between the first knowledge data to which they belong), and test item number B1 and repair means C2 belong to the same second knowledge data (likewise extremely high similarity). The coordinates of the sample point mapped by model A1 may then be (1, 0, 0), those of repair means C1 may be (0, 0, 1), those of test item number B1 may be (0, 10, 0), and those of repair means C2 may be (0, 10, 1).
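A brief sketch of this coordinate mapping, using hypothetical values in the spirit of the example above: points from the same knowledge data sit close together, while first- and second-knowledge-data points are pushed far apart on the test-item axis.

```python
import math

# Illustrative coordinates in a 3-D space whose axes are assumed to be
# (hardware model, test item number, repair means); values are invented.
points = {
    "model_A1":  (1, 0, 0),
    "repair_C1": (0, 0, 1),   # same first knowledge data as model_A1
    "item_B1":   (0, 10, 0),
    "repair_C2": (0, 10, 1),  # same second knowledge data as item_B1
}

def dist(a, b):
    """Euclidean distance between two sample points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

With these coordinates, `dist(model_A1, repair_C1)` is much smaller than `dist(model_A1, item_B1)`, so the clustering is likely to keep each knowledge data's points together.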
Step S3033: and grouping the sample points according to the respective coordinates of the sample points in the preset multidimensional space to obtain a grouping result.
In a specific implementation, the distance (such as euclidean distance) between every two sample points may be calculated according to the respective coordinates of each sample point, and multiple sample points with relatively close distances are divided into the same group, so as to obtain a grouping result.
Step S3034: and establishing the failure item processing list according to the grouping result.
In a specific implementation, the characteristic information (i.e., the first or second characteristic information) mapped by the sample points in the same group may be associated with the repair means (i.e., the first or second repair means) mapped by the sample points in that group, thereby establishing the failure item processing list.
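The association step S3034 can be sketched as follows; the group representation and all labels are hypothetical, chosen only to illustrate turning a grouping result into processing-list entries:

```python
# Hypothetical sketch of step S3034: within each group produced by the
# clustering, associate characteristic information with repair means.
def build_processing_list(groups):
    """groups: list of groups; each group is a list of
    ("feature" | "repair", value) labels for its sample points.
    Returns a mapping from each feature to the repair means in its group."""
    processing_list = {}
    for group in groups:
        features = [v for kind, v in group if kind == "feature"]
        repairs = [v for kind, v in group if kind == "repair"]
        for f in features:
            processing_list[f] = repairs
    return processing_list
```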
In this embodiment, determining the sample-point coordinates according to the similarity between knowledge data ensures the reliability of the grouping result obtained by the subsequent cluster analysis, and thus ensures the success rate of the anomaly repair performed by the test platform according to the failure item processing list.
Optionally, the grouping the sample points according to the coordinates of each sample point in the preset multidimensional space to obtain a grouping result includes:
step S401: and selecting sample points mapped by each of the k repair means from the sample points as initial clustering centers.
Here, k is the sum of the number of types of hardware devices associated with the first knowledge data and the number of types of test items associated with the second knowledge data, and the sample points mapped by the k repair means are respectively associated with hardware devices or test items of different types.
In a specific implementation, using the sum of the numbers of types of associated hardware devices and test items as the number of groups to be divided (i.e., k) avoids harming cluster-analysis efficiency through an excessively large group count while still ensuring a reliable grouping result. Using the repair means associated with different types of hardware devices or test items as the initial cluster centers helps, during the subsequent iterative updating, to place the sample points mapped by the repair means into groups that each correspond to a different type of hardware device or test item. This further ensures the reliability of the grouping result and makes it convenient to establish a finer-grained failure item processing list for each type of hardware device or test item.
Step S402: and dividing each sample point into groups corresponding to k initial cluster centers according to the distances between the sample points and the k initial cluster centers.
In practice, for each sample point, the sample point may be divided into groups that are located closest to its cluster center.
Step S403: and iteratively updating the clustering centers and the sample points corresponding to each k groups by taking the distance between each sample point and the clustering center corresponding to the sample point as a target to obtain a grouping result.
In a specific implementation, after all sample points are divided into groups in each round, the cluster centers of the changed groups are determined again, for example by taking the centroid of the positions of the sample points in each group as that group's new cluster center. The sum of the distances between all sample points and the cluster centers of the groups to which they belong is then calculated and compared with a set value. If the sum of distances is smaller than the set value, the grouping result is determined from the current grouping; otherwise, all sample points are divided into groups again according to the newly determined cluster centers. These steps are repeated until the currently calculated sum of distances is smaller than the set value or the iteration limit is reached, at which point the grouping stops.
The sum of the distances between all sample points and the cluster centers of the groups to which they belong can be determined by the following formula:

$$J = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$$

where $J$ represents the sum of distances between all sample points and the cluster centers of their groups (also called the within-cluster sum of squares over all groups); $k$ represents the number of groups; $C_i$ represents the sample point set corresponding to the i-th group; $x$ represents one sample point in $C_i$; $\mu_i$ represents the cluster center of the i-th group; and $\lVert x - \mu_i \rVert$ represents the Euclidean distance between $x$ and $\mu_i$.
For example, for n sample points constructed in the preset multidimensional space, cluster analysis may be performed on them as follows:
(1) K sample points are selected from n sample points to serve as initial clustering centers respectively.
(2) And calculating the distances between the n sample points and the clustering center respectively.
(3) For each sample point of the n sample points, the sample point is merged into a group in which the cluster center that is the smallest distance apart from it is located.
(4) Recalculate the cluster center of each group, divide all sample points into groups again according to the new cluster centers, and repeat these steps until the division of each group no longer changes.
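Steps (1)-(4), together with the within-cluster sum of squares J, can be sketched minimally as below. This is a generic K-means sketch under stated assumptions (random initial centers rather than the repair-means-based initialization of step S401, and a fixed iteration cap), not the platform's actual implementation:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two sample points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal k-means sketch of steps (1)-(4): pick k initial centers,
    assign each point to its nearest center, recompute centers, and
    repeat until the group division no longer changes."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # (1) initial cluster centers
    assignment = None
    for _ in range(max_iter):
        # (2)+(3) assign each point to the group of its nearest center
        new_assignment = [min(range(k), key=lambda i: dist(p, centers[i]))
                          for p in points]
        if new_assignment == assignment:     # (4) stop once groups are stable
            break
        assignment = new_assignment
        # recompute each center as the mean of its group's points
        for i in range(k):
            members = [p for p, g in zip(points, assignment) if g == i]
            if members:
                centers[i] = tuple(sum(c) / len(members)
                                   for c in zip(*members))
    return centers, assignment

def objective(points, centers, assignment):
    """Within-cluster sum of squares J over all groups."""
    return sum(dist(p, centers[g]) ** 2 for p, g in zip(points, assignment))
```

On well-separated data, the loop typically converges in a few iterations, and `objective` gives the quantity compared against the set value.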
Optionally, in one embodiment, the repairing the target failure item in the first test result includes:
step S501: and sequencing all the test failure items in the target failure items, and adding a mark item after the last test failure item to obtain a target list.
In the implementation, the test platform may sort each test failure item in the target failure items according to the acquisition order of the test failure items or other sorting rules, and add a flag item after the last test failure item to obtain the target list.
Step S502: and sequentially selecting test failure items from the target list for repairing.
In the specific implementation, the test platform sequentially selects test failure items from the target list for repair according to the sequence from front to back.
Step S503: and when the mark item is selected, ending the restoration of the target failure item.
When the flag item is selected, it indicates that repair has been executed for all the preceding test failure items; the test platform then ends the repair of the target failure items and starts the second round of test on them.
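Steps S501-S503 amount to a sentinel-terminated loop, which can be sketched as follows; the sorting key and function names are illustrative assumptions:

```python
# Hypothetical sketch of steps S501-S503: a sentinel "flag item" appended
# after the sorted failure items ends the repair loop.
FLAG_ITEM = object()  # the mark item added after the last test failure item

def repair_all(failure_items, repair_one):
    """Repair the failure items in order; stop at the flag item.
    Returns the list of items for which repair was attempted."""
    target_list = sorted(failure_items) + [FLAG_ITEM]   # step S501
    repaired = []
    for item in target_list:                            # step S502
        if item is FLAG_ITEM:                           # step S503
            break
        repair_one(item)
        repaired.append(item)
    return repaired
```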
Referring to the schematic structural diagram of the test platform shown in fig. 4, the test method is exemplified by the test platform executing the operating system test. Referring to a schematic diagram of an implementation process of the test method shown in fig. 5, the implementation process includes:
(1) The controller starts full-scale functional testing (namely, first round testing) of the operating system in response to receiving the testing start instruction, and issues a first instruction to the testing machine.
The first instruction is an instruction for indicating to test all test items, such as a test instruction carrying a field "all".
(2) After the first round of testing is completed, the testing machine puts the testing result into the configured database.
The testing machine can also directly send the testing result to the target module, so that information interaction between the control machine and the target module is reduced.
(3) The control machine compares the test results with the baseline data.
In this step, after executing a test item the test machine outputs a test result with a specific field such as 0 or 1, and the control machine compares the test result with the baseline data (which describes whether each specific field represents test success or failure) to determine whether the corresponding test item is a test failure item or a test success item.
(4) The control machine puts the test success item into a database configured by the control machine, simultaneously displays the test success item through a configured visual platform, and sends the test failure item to a target module (also called a diagnosis processing module).
The application additionally configures a target module for the test platform. According to the empirical data provided by the user (such as the driver library and the knowledge base), the target module enables the test platform to diagnose and repair false-alarm Bugs or some simple defects of software products. The target module and the control machine may be deployed on the same physical device, or the target module may be deployed independently on its own physical device; the deployment mode of the target module is not specifically limited.
(5) And the target module takes the test failure items sent by the controller as target failure items and repairs the test failure items.
In this step, the target module sorts the test failure items sent from the control machine and adds a flag item (also called a special test item) after the last test failure item to obtain a target list (also called a failure item list). The target module then takes the test failure items out of the target list in order, from front to back, for repair.
For the currently fetched test failure item, the target module extracts characteristic information (first or second characteristic information) from the log of the test failure item and performs data normalization on the extracted characteristic information to obtain a feature value; for example, "model A" is normalized to 1 (i.e., the feature value).
The data normalization here is the same as that used in the cluster analysis stage, including converting data to numeric form, normalizing it, and so on; correspondingly, the failure item processing list records the association between the normalized characteristic information (i.e., the feature values) and the normalized repair means.
The target module obtains the repair means associated with the feature value (i.e., the target repair means) from the failure item processing list, obtains the target data required for executing the repair means from the configured database, and repairs the test failure item using the target data. The target module ends the repair once the special test item is taken out.
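The lookup chain described above — log feature, normalized feature value, target repair means, target data — can be sketched as below; every name, value, and table here is invented for illustration:

```python
# Hypothetical sketch of step (5): normalize a feature extracted from the
# failure log and use the feature value as an index into the failure item
# processing list, then fetch the data needed to execute the repair means.
failure_item_processing_list = {
    1: "reload_driver_A",            # feature value -> repair means
}
normalization_table = {"model A": 1}  # same mapping as the clustering stage
repair_data_db = {"reload_driver_A": {"driver_package": "driver_A.rpm"}}

def lookup_repair(log_feature):
    """Return (target repair means, target data), or None if no
    associated repair means exists for this feature."""
    value = normalization_table.get(log_feature)      # e.g. "model A" -> 1
    means = failure_item_processing_list.get(value)   # target repair means
    if means is None:
        return None
    target_data = repair_data_db[means]               # data needed to repair
    return means, target_data
```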
(6) After the repair is finished, the target module sends a second instruction to the controller, the controller is triggered to start a second round of test, and the controller correspondingly sends a third instruction to the tester.
The third instruction is an instruction for indicating to test the target failure item, such as a test instruction carrying the identifier of the target failure item.
In a specific implementation, while repairing the target failure items, the target module may record in the database the first identifiers of the repaired test failure items and the second identifiers of the test failure items that were not repaired because no associated repair means was found, and then send a second instruction carrying the first identifiers to the control machine, triggering the control machine to start the second round of test only for the repaired test failure items, which further improves test efficiency.
(7) After the second round of test is completed, the control machine displays the test result through the visual platform.
In this step, the control machine visually displays the test failure items and test success items obtained in the second round of test, aggregates the results of the two rounds to generate a test report, and sends the test report to the mailbox of the test owner, so that the test owner can notify the relevant personnel to repair, and run regression tests on, the anomalies that the test platform could not repair.
In the above example, the test scheme provided by the application can effectively reduce the proportion of false-alarm Bugs that users must handle on the test platform, freeing development personnel from solving meaningless Bugs and test personnel from regressing them. This greatly saves the time of development and test personnel, shortens the test period, and thus improves the release efficiency and quality of operating system versions.
In a second aspect, referring to fig. 4, a schematic structural diagram of a test platform according to an embodiment of the present application is shown, where the test platform includes a controller, a tester, and a target module, where:
The control machine is used for starting a first round of test in response to receiving a test starting instruction and sending a first instruction to the test machine;
The testing machine is used for responding to the first instruction, executing a first round of testing to obtain a first testing result and sending the first testing result to the control machine;
The control machine is further configured to send a target failure item in the first test result to the target module, where the target failure item includes a test failure item caused by at least one of a driving configuration exception and a test environment configuration exception;
The target module is used for repairing the target failure item and sending a second instruction to the controller after the target failure item is repaired;
The control machine is further used for starting a second round of test and sending a third instruction to the test machine in response to receiving the second instruction;
The testing machine is further used for responding to the third instruction, executing a second round of testing for the target failure item to obtain a second testing result, and sending the second testing result to the control machine;
The control machine is further used for generating a test report according to the first test result and the second test result, and the test report is used for performing exception repair and regression testing by a user.
According to the above technical scheme, after the first round of test is completed, the test platform repairs the target failure items, thereby eliminating the influence of factors such as driving configuration anomalies and test environment configuration anomalies on the test results. After the repair ends, a second round of test is executed on the target failure items, and a test report is generated by combining the results of the two rounds. This avoids the large amount of time otherwise spent on anomaly repair and regression testing of test failure items caused by driving configuration anomalies or test environment configuration anomalies, and shortens the test period of the software product.
Optionally, the target module is further configured to perform the following steps:
Under the condition that the target failure item comprises a first test failure item caused by abnormal driving configuration, acquiring target characteristic information related to hardware equipment from a log of the first test failure item;
under the condition that the target failure item comprises a second test failure item caused by abnormal configuration of a test environment, acquiring target characteristic information related to the test item from a log of the second test failure item;
acquiring target restoration means from a pre-established failure item processing list according to the acquired target feature information;
Acquiring target data required by executing the target restoration means from a pre-established database;
And repairing the target failure item by utilizing the target data.
Optionally, the target module is further configured to perform the following steps:
Collecting first knowledge data related to drive configuration anomalies, the first knowledge data comprising: each piece of first characteristic information related to the hardware equipment and each piece of first repairing means when the hardware equipment has abnormal driving configuration;
collecting respective second knowledge data related to the test environment configuration anomaly, the second knowledge data comprising: each second characteristic information related to the test item and each second repairing means when the test item has abnormal configuration of the test environment;
And establishing the failure item processing list by carrying out cluster analysis on the first knowledge data and the second knowledge data.
Optionally, the target module is further configured to perform the following steps:
According to the similarity degree between the first knowledge data, determining coordinates of sample points mapped by the first characteristic information and the first repairing means in a preset multidimensional space;
According to the similarity degree between the second knowledge data, determining coordinates of sample points mapped in the preset multidimensional space by the second characteristic information and the second repairing means in the second knowledge data;
grouping the sample points according to the respective coordinates of the sample points in the preset multidimensional space to obtain a grouping result;
and establishing the failure item processing list according to the grouping result.
Optionally, the target module is further configured to perform the following steps:
selecting sample points mapped by k repair means from the sample points as initial clustering centers, wherein k is the sum of the number of types of hardware devices associated with the first knowledge data and the number of types of test items associated with the second knowledge data, and the sample points mapped by k repair means are associated with hardware devices or test items of different types;
Dividing each sample point into groups corresponding to k initial cluster centers according to the distances between the sample points and the k initial cluster centers;
And iteratively updating the cluster centers and the sample points corresponding to each of the k groups with the aim of minimizing the distance between each sample point and the cluster center corresponding to the sample point, to obtain a grouping result.
Optionally, the target module is further configured to perform the following steps:
sequencing all the test failure items in the target failure items, and adding a mark item after the last test failure item to obtain a target list;
sequentially selecting test failure items from the target list for repairing;
and when the mark item is selected, ending the restoration of the target failure item.
Optionally, the control machine is further configured to send the test report to the user through a preset information transmission manner, and/or is further configured to display the test report to the user through a preset visualization manner.
In a third aspect, an embodiment of the present application provides a testing apparatus applied to a testing platform, as shown in fig. 6, where the apparatus includes:
the first test module is used for responding to the received test starting instruction, executing a first round of test and obtaining a first test result;
The first repairing module is used for repairing a target failure item in the first test result, wherein the target failure item comprises a test failure item caused by at least one of driving configuration abnormality and test environment configuration abnormality;
the second test module is used for executing a second round of test on the target failure item after the target failure item is repaired, so as to obtain a second test result;
The first generation module is used for generating a test report according to the first test result and the second test result, and the test report is used for performing exception repair and regression test by a user.
Optionally, the apparatus further comprises:
The first acquisition module is used for acquiring target characteristic information related to the hardware equipment from a log of a first test failure item under the condition that the target failure item comprises the first test failure item caused by abnormal driving configuration;
the second acquisition module is used for acquiring target characteristic information related to the test item from a log of a second test failure item under the condition that the target failure item comprises the second test failure item caused by abnormal test environment configuration;
The first repair module includes:
The third acquisition module is used for acquiring target restoration means from a pre-established failure item processing list according to the acquired target feature information;
a fourth obtaining module, configured to obtain, from a database that is built in advance, target data required for executing the target repair means;
And the second repairing module is used for repairing the target failure item by utilizing the target data.
Optionally, the apparatus further comprises:
The first collection module is used for collecting first knowledge data related to driving configuration abnormality, and the first knowledge data comprises: each piece of first characteristic information related to the hardware equipment and each piece of first repairing means when the hardware equipment has abnormal driving configuration;
The second collecting module is used for collecting second knowledge data related to the configuration abnormality of the test environment, and the second knowledge data comprises: each second characteristic information related to the test item and each second repairing means when the test item has abnormal configuration of the test environment;
the first establishing module is used for establishing the failure item processing list through cluster analysis on the first knowledge data and the second knowledge data.
Optionally, the first establishing module includes:
The first determining module is used for determining coordinates of sample points mapped by each first characteristic information and each first repairing means in each first knowledge data in a preset multidimensional space according to the similarity degree between each first knowledge data;
the second determining module is used for determining coordinates of sample points mapped by each second characteristic information and each second repairing means in each second knowledge data in the preset multidimensional space according to the similarity degree between the second knowledge data;
The first grouping module is used for grouping each sample point according to the respective coordinates of each sample point in the preset multidimensional space to obtain a grouping result;
and the second establishing module is used for establishing the failure item processing list according to the grouping result.
Optionally, the first grouping module includes:
The first grouping sub-module is used for selecting sample points mapped by each of k repairing means from the sample points as initial clustering centers, wherein k is the sum of the type number of hardware devices associated with each of the first knowledge data and the type number of test items associated with each of the second knowledge data, and the sample points mapped by each of the k repairing means are respectively associated with different types of hardware devices or test items;
The second grouping sub-module is used for dividing each sample point into groups corresponding to k initial clustering centers according to the distances between the sample points and the k initial clustering centers;
And the third grouping sub-module is used for iteratively updating the clustering centers and the sample points corresponding to the k groups respectively with the aim of minimizing the distance between each sample point and the clustering center corresponding to the sample point to obtain a grouping result.
Optionally, the first repair module includes:
The first repairing sub-module is used for sorting the test failure items in the target failure items, and adding a mark item after the last test failure item to obtain a target list;
the second repairing sub-module is used for sequentially selecting test failure items from the target list for repair;
and the third repairing sub-module is used for ending the repair of the target failure items when the mark item is selected.
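The three repair sub-modules amount to a sentinel-terminated loop: sort the failure items, append a mark item, repair items in order, and stop when the mark item is selected. A minimal sketch, with hypothetical failure-item names and a stand-in `repair` step:

```python
# Hypothetical failure items; each repair is assumed to be a callable step.
def repair(item: str) -> None:
    print(f"repairing {item}")

MARK = object()  # sentinel mark item appended after the last failure item

def repair_target_failures(failure_items: list) -> list:
    """Sort the failure items, append a mark item, then repair items in
    order until the mark item is selected."""
    target_list = sorted(failure_items) + [MARK]
    repaired = []
    for item in target_list:
        if item is MARK:      # mark item selected: repair is finished
            break
        repair(item)
        repaired.append(item)
    return repaired

done = repair_target_failures(["fan_driver_fail", "bmc_env_fail"])
assert done == ["bmc_env_fail", "fan_driver_fail"]
```

The mark item gives the loop an unambiguous end condition even when failure items are added to the list while earlier ones are being repaired.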
Optionally, the apparatus further comprises at least one of:
the report sending module is used for sending the test report to the user through a preset information transmission mode;
and the report display module is used for displaying the test report to the user in a preset visual mode.
As can be seen from the above technical solution, after the first round of testing is completed, the test platform repairs the target failure items, thereby avoiding the influence of factors such as drive configuration anomalies and test environment configuration anomalies on the test results. After the repair is finished, a second round of testing is executed on the target failure items, and a test report is generated by combining the results of the two rounds of testing. In this way, the considerable time otherwise spent on anomaly repair and regression testing of test failure items caused by drive configuration anomalies or test environment configuration anomalies is avoided, and the test cycle of a software product is shortened.
It should be noted that the apparatus embodiment is similar to the method embodiment, so its description is relatively brief; for relevant details, refer to the description of the method embodiment.
The embodiment of the application also provides an electronic device. Referring to fig. 7, fig. 7 is a schematic diagram of the electronic device according to an embodiment of the application. As shown in fig. 7, the electronic device 100 includes a memory 110 and a processor 120 connected through a bus; the memory 110 stores a computer program that can run on the processor 120, and the computer program, when executed, implements the steps in the test method disclosed by the embodiments of the application.
The embodiment of the application also provides a computer readable storage medium. Referring to fig. 8, fig. 8 is a schematic diagram of the computer readable storage medium according to an embodiment of the application. As shown in fig. 8, the computer readable storage medium 200 has stored thereon a computer program/instruction 210, which, when executed by a processor, implements the steps of the test method disclosed in the embodiments of the application.
Embodiments of the present application also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the steps of the test method as disclosed in the embodiments of the present application.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may refer to one another.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, systems, apparatus, storage media and program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The test method, test platform, device, medium and product provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is intended only to help understand the method and its core ideas. Meanwhile, those skilled in the art may, according to the ideas of the present application, make changes to the specific embodiments and the application scope; in summary, the contents of this specification should not be construed as limiting the present application.

Claims (15)

1. A method of testing, for use with a test platform, the method comprising:
in response to receiving a test start instruction, executing a first round of test to obtain a first test result;
Repairing a target failure item in the first test result, wherein the target failure item comprises a test failure item caused by at least one of a driving configuration abnormality and a test environment configuration abnormality;
After the target failure item is repaired, executing a second round of test on the target failure item to obtain a second test result;
generating a test report according to the first test result and the second test result, wherein the test report is used by a user to perform exception repair and regression testing;
the repairing the target failure item in the first test result comprises the following steps:
acquiring target restoration means from a pre-established failure item processing list according to target characteristic information acquired for a target failure item in the first test result;
Acquiring target data required by executing the target restoration means from a pre-established database;
Repairing the target failure item by utilizing the target data;
Wherein, the failure item processing list is established by the following steps:
Collecting first knowledge data related to drive configuration anomalies, the first knowledge data comprising: each piece of first characteristic information related to the hardware equipment and each piece of first repairing means when the hardware equipment has abnormal driving configuration;
collecting respective second knowledge data related to the test environment configuration anomaly, the second knowledge data comprising: each second characteristic information related to the test item and each second repairing means when the test item has abnormal configuration of the test environment;
And carrying out cluster analysis on each first knowledge data and each second knowledge data according to the coordinates of sample points mapped by each first knowledge data and each second knowledge data in a preset multidimensional space, and establishing the failure item processing list, wherein the coordinates of sample points mapped by each first knowledge data and each second knowledge data in the preset multidimensional space are determined according to the similarity degree between each first knowledge data and the similarity degree between each second knowledge data.
2. The test method of claim 1, wherein prior to repairing the target failure item in the first test result, the method further comprises:
Under the condition that the target failure item comprises a first test failure item caused by abnormal driving configuration, acquiring target characteristic information related to hardware equipment from a log of the first test failure item;
and under the condition that the target failure item comprises a second test failure item caused by abnormal configuration of the test environment, acquiring target characteristic information related to the test item from a log of the second test failure item.
3. The test method according to claim 2, wherein the establishing the failure term processing list according to coordinates of sample points mapped by the respective first knowledge data and the respective second knowledge data in a preset multidimensional space, performs cluster analysis on the respective first knowledge data and the respective second knowledge data, includes:
According to the similarity degree between the first knowledge data, determining coordinates of sample points mapped by the first characteristic information and the first repairing means in a preset multidimensional space;
According to the similarity degree between the second knowledge data, determining coordinates of sample points mapped in the preset multidimensional space by the second characteristic information and the second repairing means in the second knowledge data;
grouping the sample points according to the respective coordinates of the sample points in the preset multidimensional space to obtain a grouping result;
and establishing the failure item processing list according to the grouping result.
4. The method according to claim 3, wherein the grouping the sample points according to the coordinates of each sample point in the preset multidimensional space to obtain the grouping result includes:
selecting sample points mapped by k repair means from the sample points as initial clustering centers, wherein k is the sum of the number of types of hardware devices associated with the first knowledge data and the number of types of test items associated with the second knowledge data, and the sample points mapped by k repair means are associated with hardware devices or test items of different types;
Dividing each sample point into groups corresponding to k initial cluster centers according to the distances between the sample points and the k initial cluster centers;
and iteratively updating the cluster centers and the sample points corresponding to the k groups with the aim of minimizing the distance between each sample point and its corresponding cluster center, to obtain a grouping result.
5. The method of claim 1, wherein repairing the target failure item in the first test result comprises:
sorting the test failure items in the target failure items, and adding a mark item after the last test failure item to obtain a target list;
sequentially selecting test failure items from the target list for repair;
and ending the repair of the target failure items when the mark item is selected.
6. The method of any of claims 1-5, wherein after generating a test report based on the first test result and the second test result, the method further comprises:
And sending the test report to the user through a preset information transmission mode, and/or displaying the test report to the user through a preset visual mode.
7. A test platform, wherein the test platform comprises a control machine, a test machine and a target module, wherein:
The control machine is used for starting a first round of test in response to receiving a test starting instruction and sending a first instruction to the test machine;
The testing machine is used for responding to the first instruction, executing a first round of testing to obtain a first testing result and sending the first testing result to the control machine;
The control machine is further configured to send a target failure item in the first test result to the target module, where the target failure item includes a test failure item caused by at least one of a driving configuration exception and a test environment configuration exception;
The target module is used for repairing the target failure item and sending a second instruction to the control machine after the target failure item is repaired;
The control machine is further used for starting a second round of test and sending a third instruction to the test machine in response to receiving the second instruction;
The testing machine is further used for responding to the third instruction, executing a second round of testing for the target failure item to obtain a second testing result, and sending the second testing result to the control machine;
The control machine is also used for generating a test report according to the first test result and the second test result, wherein the test report is used by the user to perform exception repair and regression testing;
the target module is further configured to perform the steps of:
acquiring target restoration means from a pre-established failure item processing list according to target characteristic information acquired for a target failure item in the first test result;
Acquiring target data required by executing the target restoration means from a pre-established database;
Repairing the target failure item by utilizing the target data;
Wherein the target module is further configured to perform the steps of:
Collecting first knowledge data related to drive configuration anomalies, the first knowledge data comprising: each piece of first characteristic information related to the hardware equipment and each piece of first repairing means when the hardware equipment has abnormal driving configuration;
collecting respective second knowledge data related to the test environment configuration anomaly, the second knowledge data comprising: each second characteristic information related to the test item and each second repairing means when the test item has abnormal configuration of the test environment;
And carrying out cluster analysis on each first knowledge data and each second knowledge data according to the coordinates of sample points mapped by each first knowledge data and each second knowledge data in a preset multidimensional space, and establishing the failure item processing list, wherein the coordinates of sample points mapped by each first knowledge data and each second knowledge data in the preset multidimensional space are determined according to the similarity degree between each first knowledge data and the similarity degree between each second knowledge data.
8. The test platform of claim 7, wherein the target module is further configured to perform the steps of:
Under the condition that the target failure item comprises a first test failure item caused by abnormal driving configuration, acquiring target characteristic information related to hardware equipment from a log of the first test failure item;
and under the condition that the target failure item comprises a second test failure item caused by abnormal configuration of the test environment, acquiring target characteristic information related to the test item from a log of the second test failure item.
9. The test platform of claim 8, wherein the goal module is further configured to perform the steps of:
According to the similarity degree between the first knowledge data, determining coordinates of sample points mapped by the first characteristic information and the first repairing means in a preset multidimensional space;
According to the similarity degree between the second knowledge data, determining coordinates of sample points mapped in the preset multidimensional space by the second characteristic information and the second repairing means in the second knowledge data;
grouping the sample points according to the respective coordinates of the sample points in the preset multidimensional space to obtain a grouping result;
and establishing the failure item processing list according to the grouping result.
10. The test platform of claim 9, wherein the target module is further configured to perform the steps of:
selecting sample points mapped by k repair means from the sample points as initial clustering centers, wherein k is the sum of the number of types of hardware devices associated with the first knowledge data and the number of types of test items associated with the second knowledge data, and the sample points mapped by k repair means are associated with hardware devices or test items of different types;
Dividing each sample point into groups corresponding to k initial cluster centers according to the distances between the sample points and the k initial cluster centers;
and iteratively updating the cluster centers and the sample points corresponding to the k groups with the aim of minimizing the distance between each sample point and its corresponding cluster center, to obtain a grouping result.
11. The test platform of claim 7, wherein the target module is further configured to perform the steps of:
sorting the test failure items in the target failure items, and adding a mark item after the last test failure item to obtain a target list;
sequentially selecting test failure items from the target list for repair;
and ending the repair of the target failure items when the mark item is selected.
12. The test platform according to any one of claims 7-11, wherein the control machine is further configured to send the test report to the user through a preset information transmission mode, and/or to display the test report to the user through a preset visualization mode.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the test method of any one of claims 1 to 6.
14. A computer readable storage medium, on which a computer program/instruction is stored, which, when executed by a processor, implements the test method according to any one of claims 1 to 6.
15. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the test method of any one of claims 1 to 6.
CN202410667427.0A 2024-05-28 2024-05-28 Test method, test platform, equipment, medium and product Active CN118245385B (en)

Publications (2)

Publication Number Publication Date
CN118245385A CN118245385A (en) 2024-06-25
CN118245385B true CN118245385B (en) 2024-08-16

Family

ID=91560825


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115827451A (en) * 2022-11-30 2023-03-21 海尔优家智能科技(北京)有限公司 Method and device for detecting test defects, storage medium and electronic device
CN116010283A (en) * 2023-02-16 2023-04-25 中国工商银行股份有限公司 Test case repairing method, device, computer equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN112363920B (en) * 2020-11-06 2024-09-20 广州品唯软件有限公司 Repair method and device for test cases, computer equipment and storage medium
CN115454860A (en) * 2022-09-19 2022-12-09 北京天融信网络安全技术有限公司 Automatic testing method and device, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant