
CN111340173A - Method and system for training a generative adversarial network for high-dimensional data, and electronic device - Google Patents

Method and system for training a generative adversarial network for high-dimensional data, and electronic device

Info

Publication number
CN111340173A
CN111340173A
Authority
CN
China
Prior art keywords: tensor train, dimensional data, generative adversarial network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911343340.3A
Other languages
Chinese (zh)
Inventor
周阳
张涌
宁立
王书强
许宜诚
文森特·周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911343340.3A
Publication of CN111340173A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks


Abstract

The present application relates to a generative adversarial network (GAN) training method, system, and electronic device for high-dimensional data. The method includes: step a: building a GAN backbone structure; step b: performing tensor train decomposition on the GAN backbone structure using the tensor train decomposition algorithm; step c: training a tensor autoencoder based on tensor train decomposition using real high-dimensional data, the tensor autoencoder outputting the spatial structure features of the real high-dimensional data; and step d: combining the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and training the GAN. The present application gives the GAN the ability to directly generate high-dimensional data while reducing the number of network parameters, making training more stable and improving the quality and diversity of the generated data.

Description

A Generative Adversarial Network Training Method, System and Electronic Device for High-Dimensional Data

Technical Field

The present application belongs to the technical field of generative adversarial network training, and in particular relates to a generative adversarial network training method, system, and electronic device for high-dimensional data.

Background

The main role of a generative adversarial network (GAN) is to fit a real distribution with a fake distribution through continuous learning. With the development of deep learning, GANs can be used to generate things that resemble the real world, such as pictures, music, and articles. At present, GANs are widely used in dataset augmentation, face photo generation, medical image translation, super-resolution, and other fields; their development potential is enormous.

The invention patent with application number "201910287274.6" provides a stable training method for a generative adversarial network, including: inputting a training image into an autoencoder for processing to obtain a first generated image; training a pre-trained generator based on the loss value between the training image and the first generated image; training the pre-trained generator and a discriminator based on the training image and a second generated image produced by the pre-trained generator; and obtaining the generative adversarial network from the generator and discriminator at the end of training.

Another invention patent, application number "201811461559.9", provides a generative adversarial network device and a training method for it. The device includes a generation network and a discriminant network. The generation network is configured to generate a first sample from input data; the discriminant network is coupled to the generation network and configured to receive the first sample and train on it; the generation network includes a first memristor array as a first array of weight values. This device can omit the process of adding noise to the fake samples produced by the generation network, thereby saving training time, reducing resource consumption, and increasing the training speed of the generative adversarial network.

To sum up, existing generative adversarial networks still have the following defects:

1. Difficult to train and very unstable. It is hard for the generator and discriminator to reach a state in which they promote each other well; in actual training, the discriminator converges easily, but the generator is difficult to train.

2. Mode collapse. Generative adversarial networks easily fall into mode collapse during learning, i.e., the generator always produces similar samples and cannot continue learning.

3. Current generative adversarial networks have difficulty directly generating complex high-dimensional data. They are mostly applied to generating lower-dimensional data such as pictures. When applied to complex high-dimensional data, taking medical imaging data as an example, medical image conversion algorithms typically use the GAN to generate one slice at a time; this loses the structural information of the original data and greatly compromises the integrity of the generated data.

Summary of the Invention

The present application provides a generative adversarial network training method, system, and electronic device for high-dimensional data, aiming to solve, at least to some extent, one of the above technical problems in the prior art.

To solve the above problems, the present application provides the following technical solutions:

A generative adversarial network training method for high-dimensional data, comprising the following steps:

Step a: build the backbone structure of the generative adversarial network;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network.

The technical solution adopted in the embodiments of the present application further includes: in step a, the GAN backbone structure is a GAN backbone structure based on 3D convolution and 3D deconvolution.

The technical solution adopted in the embodiments of the present application further includes: in step b, performing tensor train decomposition on the GAN backbone structure using the tensor train decomposition algorithm is specifically: introducing the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers in the GAN backbone structure, and performing tensor train decomposition on those layers to obtain 3D-TT-Conv and 3D-TT-deConv layers.

The technical solution adopted in the embodiments of the present application further includes: the tensor train decomposition of the 3D convolution specifically comprises:

Assume the input three-dimensional data has dimensions W×H×D with C channels, i.e., the input tensor is X of size W×H×D×C, the convolution kernel is K of size l×l×l×C×S, and the output tensor after convolution is Y of size W′×H′×D′×S.

Writing each element of the output tensor as Y(x, y, z, s), the formula for 3D convolution is:

Y(x, y, z, s) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c=1..C} K(i, j, k, c, s) · X(x+i-1, y+j-1, z+k-1, c)

Express the dimensions of each channel of the output tensor as:

H′ = H - l + 1

W′ = W - l + 1

D′ = D - l + 1

Convert the input tensor into a matrix X of size W′H′D′ × l³C, with the corresponding element transformation:

X[(x′, y′, z′), (i, j, k, c)] = X(x′+i-1, y′+j-1, z′+k-1, c)

Convert the convolution kernel tensor into a matrix K of size l³C × S, with the corresponding element transformation:

K[(i, j, k, c), s] = K(i, j, k, c, s)

Perform matrix multiplication of the input matrix X and the convolution kernel matrix K to obtain an output matrix Y of size W′H′D′ × S, and restore the output matrix Y to the output tensor:

Y(x′, y′, z′, s) = Y[(x′, y′, z′), s]

Apply tensor train decomposition to the convolution kernel matrix K: factorize the input and output dimensions as C = C1C2…Cd and S = S1S2…Sd, tensorize the matrix K into a tensor, and perform tensor train decomposition on it:

K((i, j, k), (c1, s1), …, (cd, sd)) = G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]

In the above formula, G0[i, j, k] is a matrix of size 1 × r0, and each Gt[ct, st] (t = 1, …, d) is a matrix of size rt-1 × rt, with rd = 1.

Convert the input tensor into a tensor of size W×H×D×C1×…×Cd, and operate this tensor with the Tensor-Train matrices of the convolution kernel to obtain an output tensor of size (W-l+1)×(H-l+1)×(D-l+1)×S1×…×Sd:

Y(x, y, z, s1, …, sd) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c1=1..C1} … Σ_{cd=1..Cd} X(x+i-1, y+j-1, z+k-1, c1, …, cd) · G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]

The technical solution adopted in the embodiments of the present application further includes: in step c, training a tensor autoencoder based on tensor train decomposition using real high-dimensional data is specifically: introducing the n-mode product operation from tensor algebra into the autoencoder, and using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers, so that the dimensions of tensor data are enlarged and reduced directly and the spatial structure features of the real high-dimensional data are extracted.

Another technical solution adopted by the embodiments of the present application is: a generative adversarial network training system for high-dimensional data, including:

Network backbone building module: used to build the backbone structure of the generative adversarial network;

Tensor train decomposition module: used to perform tensor train decomposition on the GAN backbone structure using the tensor train decomposition algorithm;

Tensor autoencoder training module: used to train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and to output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Network training module: used to combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and to train the generative adversarial network.

The technical solution adopted in the embodiments of the present application further includes: the GAN backbone structure is a GAN backbone structure based on 3D convolution and 3D deconvolution.

The technical solution adopted in the embodiments of the present application further includes: the tensor train decomposition module performing tensor train decomposition on the GAN backbone structure using the tensor train decomposition algorithm is specifically: introducing the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers in the GAN backbone structure, and performing tensor train decomposition on those layers to obtain 3D-TT-Conv and 3D-TT-deConv layers.

The technical solution adopted in the embodiments of the present application further includes: the tensor train decomposition of the 3D convolution specifically comprises:

Assume the input three-dimensional data has dimensions W×H×D with C channels, i.e., the input tensor is X of size W×H×D×C, the convolution kernel is K of size l×l×l×C×S, and the output tensor after convolution is Y of size W′×H′×D′×S.

Writing each element of the output tensor as Y(x, y, z, s), the formula for 3D convolution is:

Y(x, y, z, s) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c=1..C} K(i, j, k, c, s) · X(x+i-1, y+j-1, z+k-1, c)

Express the dimensions of each channel of the output tensor as:

H′ = H - l + 1

W′ = W - l + 1

D′ = D - l + 1

Convert the input tensor into a matrix X of size W′H′D′ × l³C, with the corresponding element transformation:

X[(x′, y′, z′), (i, j, k, c)] = X(x′+i-1, y′+j-1, z′+k-1, c)

Convert the convolution kernel tensor into a matrix K of size l³C × S, with the corresponding element transformation:

K[(i, j, k, c), s] = K(i, j, k, c, s)

Perform matrix multiplication of the input matrix X and the convolution kernel matrix K to obtain an output matrix Y of size W′H′D′ × S, and restore the output matrix Y to the output tensor:

Y(x′, y′, z′, s) = Y[(x′, y′, z′), s]

Apply tensor train decomposition to the convolution kernel matrix K: factorize the input and output dimensions as C = C1C2…Cd and S = S1S2…Sd, tensorize the matrix K into a tensor, and perform tensor train decomposition on it:

K((i, j, k), (c1, s1), …, (cd, sd)) = G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]

In the above formula, G0[i, j, k] is a matrix of size 1 × r0, and each Gt[ct, st] (t = 1, …, d) is a matrix of size rt-1 × rt, with rd = 1.

Convert the input tensor into a tensor of size W×H×D×C1×…×Cd, and operate this tensor with the Tensor-Train matrices of the convolution kernel to obtain an output tensor of size (W-l+1)×(H-l+1)×(D-l+1)×S1×…×Sd:

Y(x, y, z, s1, …, sd) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c1=1..C1} … Σ_{cd=1..Cd} X(x+i-1, y+j-1, z+k-1, c1, …, cd) · G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]

The technical solution adopted in the embodiments of the present application further includes: the tensor autoencoder training module training a tensor autoencoder based on tensor train decomposition using real high-dimensional data is specifically: introducing the n-mode product operation from tensor algebra into the autoencoder, and using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers, so that the dimensions of tensor data are enlarged and reduced directly and the spatial structure features of the real high-dimensional data are extracted.

Yet another technical solution adopted in the embodiments of the present application is: an electronic device, comprising:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following operations of the above generative adversarial network training method for high-dimensional data:

Step a: build the backbone structure of the generative adversarial network;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network.

Compared with the prior art, the beneficial effects of the embodiments of the present application are: the GAN training method, system, and electronic device for high-dimensional data tensorize the 3D convolutions in the GAN through tensor train decomposition, giving the GAN the ability to directly generate high-dimensional data while reducing the number of network parameters and making training more stable. At the same time, by introducing tensor operations into the training of the autoencoder, a tensorized encoder is obtained; the tensorized encoder outputs the spatial structure features of the real high-dimensional data, and these features are introduced into the training of the GAN, so that the generator produces more realistic and diverse data, improving the quality and diversity of the generated data.

Brief Description of the Drawings

FIG. 1 is a flowchart of a generative adversarial network training method for high-dimensional data according to an embodiment of the present application;

FIG. 2 is a structural diagram of a generative adversarial network based on a tensor autoencoder and tensor train decomposition according to an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a generative adversarial network training system for high-dimensional data according to an embodiment of the present application;

FIG. 4 is a schematic structural diagram of a hardware device for the generative adversarial network training method for high-dimensional data provided by an embodiment of the present application.

Detailed Description

In order to make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application, not to limit it.

To address the shortcomings of the prior art, the generative adversarial network training method for high-dimensional data in the embodiments of the present application uses tensor train decomposition and a tensor autoencoder to improve the original GAN. By introducing tensor operations into the network structure of the GAN and decomposing the 3D convolutions in tensorized form, the ability of tensors to preserve the structural information of the input is used to increase the stability of GAN training. By using the tensor autoencoder, information about the real distribution is introduced into the GAN, and the KL divergence between the real distribution and the generated distribution can be optimized simultaneously in both the forward and reverse directions, thereby improving the GAN's ability to directly generate high-dimensional data and increasing the diversity of the generated data.
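For reference, the two directions of the KL divergence mentioned here are the standard definitions (they are not spelled out in the patent text): for real distribution P and generated distribution Q,

D_KL(P‖Q) = E_{x~P}[log(P(x)/Q(x))] and D_KL(Q‖P) = E_{x~Q}[log(Q(x)/P(x))].

Optimizing both directions penalizes the generator both for missing modes of the real distribution (the forward term) and for producing samples that fall outside it (the reverse term).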

Specifically, please refer to FIG. 1, which is a flowchart of the generative adversarial network training method for high-dimensional data according to an embodiment of the present application. The method includes the following steps:

Step 100: build a GAN backbone structure based on 3D convolution and 3D deconvolution;

Step 200: introduce the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers in the GAN backbone structure, and use it to perform tensor train decomposition on those layers;

In step 200, the principle of the tensor train decomposition algorithm is to express each element of a high-dimensional tensor as a product of several matrices. That is:

A(i1, i2, …, id) = G1(i1) G2(i2) … Gd(id)    (1)

In formula (1), Gk(ik) is a matrix of size rk-1 × rk, where the rk denote the ranks of the tensor train decomposition (TT-ranks); to ensure that the final result is a scalar, r0 = rd = 1.
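As a minimal illustration of formula (1) (not part of the patent; the shapes and names below are assumptions), the following Python sketch evaluates a single element of a TT-decomposed tensor as a product of matrices:

```python
import numpy as np

def tt_element(cores, index):
    """Evaluate one element of a TT tensor per formula (1).
    cores[k] has shape (n_k, r_{k-1}, r_k) with r_0 = r_d = 1, so the
    product of the selected matrices collapses to a scalar."""
    result = np.ones((1, 1))
    for core, i in zip(cores, index):
        result = result @ core[i]          # multiply by G_k(i_k), an r_{k-1} x r_k matrix
    return result[0, 0]

# Example: a 4-way tensor of mode sizes (3, 4, 5, 6) with TT-ranks (1, 2, 2, 2, 1).
rng = np.random.default_rng(0)
shapes, ranks = (3, 4, 5, 6), (1, 2, 2, 2, 1)
cores = [rng.standard_normal((n, ranks[k], ranks[k + 1]))
         for k, n in enumerate(shapes)]
print(tt_element(cores, (1, 2, 0, 3)))     # A(1, 2, 0, 3) as a matrix product
```

Note that the d cores store far fewer numbers than the full tensor when the TT-ranks are small, which is where the parameter reduction comes from.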

The present invention uses the tensor train decomposition algorithm to perform tensor train decomposition on the 3D-Conv (3D convolution) and 3D-deConv (3D deconvolution) layers in the generative adversarial network, obtaining 3D-TT-Conv and 3D-TT-deConv layers. In the following embodiment, the 3D-TT-Conv layer is taken as an example to describe the application of tensor train decomposition in terms of formulas and examples.

Tensor train decomposition formula for the 3D-TT-Conv layer:

1. First, analyze the conventional 3D convolution formula:

The input three-dimensional data has dimensions W×H×D with C channels, i.e., the input tensor is X of size W×H×D×C; the convolution kernel is K of size l×l×l×C×S; then the output tensor after convolution is Y of size W′×H′×D′×S.

For convenience of notation, each element of the output tensor is written as Y(x, y, z, s), and the formula for 3D convolution can be written as:

Y(x, y, z, s) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c=1..C} K(i, j, k, c, s) · X(x+i-1, y+j-1, z+k-1, c)    (2)

2. Matrixize the convolution operation:

To tensorize the 3D convolution, the 3D convolution operation must first be matrixized, and tensor train decomposition is then applied to the convolution kernel matrix. The specific steps include:

(1) Express the dimensions of each channel of the output tensor as:

H′ = H - l + 1

W′ = W - l + 1

D′ = D - l + 1    (3)

Convert the input tensor into a matrix X of size W′H′D′ × l³C, with the corresponding element transformation:

X[(x′, y′, z′), (i, j, k, c)] = X(x′+i-1, y′+j-1, z′+k-1, c)    (4)

where the row index runs over the W′H′D′ output positions and the column index over the l³C kernel positions and input channels.

(2) Convert the convolution kernel tensor into a matrix K of size l³C × S, with the corresponding element transformation:

K[(i, j, k, c), s] = K(i, j, k, c, s)    (5)

(3) Perform matrix multiplication of the input matrix X and the convolution kernel matrix K to obtain an output matrix Y of size W′H′D′ × S, and restore the output matrix Y to the output tensor:

Y(x′, y′, z′, s) = Y[(x′, y′, z′), s]    (6)

(4) Apply tensor train decomposition to the convolution kernel matrix K: factorize the input and output dimensions as C = C1C2…Cd and S = S1S2…Sd, tensorize the matrix K into a tensor, and perform tensor train decomposition on it; the TT decomposition process of the convolution kernel matrix K is then:

K((i, j, k), (c1, s1), …, (cd, sd)) = G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]    (7)

In formula (7), G0[i, j, k] is a matrix of size 1 × r0, and each Gt[ct, st] (t = 1, …, d) is a matrix of size rt-1 × rt, with rd = 1.

(5) To match the TT-decomposed convolution kernel, convert the input tensor into a tensor of size W×H×D×C1×…×Cd, and operate this tensor with the Tensor-Train matrices of the convolution kernel to obtain an output tensor of size (W-l+1)×(H-l+1)×(D-l+1)×S1×…×Sd:

Y(x, y, z, s1, …, sd) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c1=1..C1} … Σ_{cd=1..Cd} X(x+i-1, y+j-1, z+k-1, c1, …, cd) · G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]    (8)

The above formula is the final tensor train decomposition operation formula for the 3D-TT-Conv layer.
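To make the matrixization of steps (1) to (3) concrete, here is a minimal NumPy sketch (an illustration under assumed shapes and names, not the patent's implementation) that computes a 3D convolution by flattening the input into a W′H′D′ × l³C matrix and the kernel into an l³C × S matrix, and checks the result against a direct evaluation of formula (2):

```python
import numpy as np

def conv3d_im2col(x, k):
    """3D convolution via matrixization; x: (W, H, D, C), k: (l, l, l, C, S)."""
    W, H, D, C = x.shape
    l, _, _, _, S = k.shape
    Wp, Hp, Dp = W - l + 1, H - l + 1, D - l + 1          # formula (3)
    # Step (1): input tensor -> matrix of size (W'H'D', l^3 C), formula (4)
    X = np.empty((Wp * Hp * Dp, l ** 3 * C))
    for row, (xp, yp, zp) in enumerate(np.ndindex(Wp, Hp, Dp)):
        X[row] = x[xp:xp + l, yp:yp + l, zp:zp + l, :].ravel()
    # Step (2): kernel tensor -> matrix of size (l^3 C, S), formula (5)
    K = k.reshape(l ** 3 * C, S)
    # Step (3): matrix multiply, then restore the output tensor, formula (6)
    return (X @ K).reshape(Wp, Hp, Dp, S)

# Check against a direct evaluation of formula (2).
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6, 6, 2))
k = rng.standard_normal((3, 3, 3, 2, 4))
patches = np.stack([x[xp:xp + 3, yp:yp + 3, zp:zp + 3, :]
                    for xp, yp, zp in np.ndindex(4, 4, 4)])
y_ref = np.einsum('ijkcs,aijkc->as', k, patches).reshape(4, 4, 4, 4)
print(np.allclose(conv3d_im2col(x, k), y_ref))           # True
```

In a 3D-TT-Conv layer, the dense matrix K would additionally be stored in the factored form of formula (7) instead of explicitly.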

The derivation of the tensor train decomposition formula for the 3D-TT-deConv layer is the same as for the 3D-TT-Conv layer and will not be repeated here.

Step 300: train a tensor autoencoder (TT-Encoder) based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

In step 300, the present invention introduces the n-mode product operation from tensor algebra into the autoencoder, using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers. The dimensions of tensor data can thus be enlarged and reduced directly, i.e., the dimension-raising and dimension-reducing operations of the autoencoder, so that high-dimensional real data can be encoded into low-dimensional data. Since the tensor operations never vectorize the high-dimensional data, the spatial structure features of the high-dimensional data can be extracted.

For example, the n-mode product of a tensor A of size I1×…×IN with a matrix U of size J×In is defined as:

(A ×n U)(i1, …, in-1, j, in+1, …, iN) = Σ_{in=1..In} A(i1, …, iN) · U(j, in)

so that the n-th dimension of A changes from In to J; choosing J < In reduces the dimension (encoding) and J > In enlarges it (decoding).
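A small NumPy sketch of the n-mode product (an illustration; the function name and shapes are assumptions, and in a real model the matrices would be learned parameters):

```python
import numpy as np

def mode_n_product(a, u, n):
    """n-mode product a x_n u: contracts mode n of tensor a with matrix u.
    a: tensor of shape (I_1, ..., I_N); u: matrix of shape (J, I_n).
    Returns a tensor whose n-th dimension is J instead of I_n."""
    moved = np.moveaxis(a, n, -1)            # bring mode n to the last axis: (..., I_n)
    out = moved @ u.T                        # contract with u: (..., J)
    return np.moveaxis(out, -1, n)           # put the new mode back in place

# An encoder-style reduction: shrink a 16x16x16 tensor to 4x4x4 mode by mode,
# without ever flattening it into a vector, so spatial structure is preserved.
rng = np.random.default_rng(0)
code = rng.standard_normal((16, 16, 16))
for n in range(3):
    w = rng.standard_normal((4, 16))         # stand-in for a learned parameter matrix
    code = mode_n_product(code, w, n)
print(code.shape)                            # (4, 4, 4)
```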

Step 400: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network, obtaining a generative adversarial network based on the tensor autoencoder and tensor train decomposition;

In step 400, by combining the spatial structure features of the real high-dimensional data obtained after encoding with the last-layer features of the discriminator, the present invention introduces the distribution of the real high-dimensional data into the training process of the GAN, so that the KL divergence (Kullback-Leibler divergence, relative entropy) between the real distribution and the generated distribution is optimized simultaneously in both the forward and reverse directions. This guides the generator to produce more realistic and diverse data and improves the quality and diversity of the generated data. The training procedure of the GAN is the same as that of the original GAN and will not be repeated here.

The structure of the generative adversarial network based on the tensor autoencoder and tensor train decomposition is shown in FIG. 2, where z is random noise, TG is the tensor-train-decomposed 3D generator, TD is the tensor-train-decomposed 3D discriminator, TE is the Tensor-Encoder (tensor autoencoder) used to directly extract the spatial structure features f of the real high-dimensional data, and h is the last-layer feature of the discriminator.
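A minimal sketch of the feature combination in step 400 (the layer sizes, the concatenation, and the sigmoid output head are assumptions; the patent only states that f and h are combined as the input of the discriminator's last layer):

```python
import numpy as np

def discriminator_head(h, f, w_out, b_out):
    """Final discriminator layer: combines the discriminator's last-layer
    feature h with the Tensor-Encoder feature f of the real data, and scores.
    h: (batch, d_h); f: (batch, d_f); w_out: (d_h + d_f, 1); b_out: (1,)."""
    combined = np.concatenate([h, f], axis=1)   # f and h combined as the last layer's input
    logits = combined @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logits))        # probability that the input is real

rng = np.random.default_rng(0)
h = rng.standard_normal((8, 128))               # from the TT-decomposed 3D discriminator TD
f = rng.standard_normal((8, 32))                # from the Tensor-Encoder TE
score = discriminator_head(h, f, rng.standard_normal((160, 1)), np.zeros(1))
print(score.shape)                              # (8, 1)
```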

Please refer to FIG. 3, which is a schematic structural diagram of the generative adversarial network training system for high-dimensional data according to an embodiment of the present application. The system includes a network backbone building module, a tensor train decomposition module, a tensor autoencoder training module, and a network training module.

Network backbone building module: used to build a GAN backbone structure based on 3D convolution and 3D deconvolution;

Tensor train decomposition module: used to introduce the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers in the GAN backbone structure and to perform tensor train decomposition on those layers. The principle of the tensor train decomposition algorithm is to express each element of a high-dimensional tensor as a product of several matrices.

That is:

A(i1, i2, …, id) = G1(i1) G2(i2) … Gd(id)    (1)

In formula (1), Gk(ik) is a matrix of size rk-1 × rk, where the rk denote the ranks of the tensor train decomposition (TT-ranks); to ensure that the final result is a scalar, r0 = rd = 1.

The present invention uses the tensor train decomposition algorithm to perform tensor train decomposition on the 3D-Conv (3D convolution) and 3D-deConv (3D deconvolution) layers in the generative adversarial network, obtaining 3D-TT-Conv and 3D-TT-deConv layers. In the following embodiment, the 3D-TT-Conv layer is taken as an example to describe the application of tensor train decomposition in terms of formulas and examples.

Tensor train decomposition formula for the 3D-TT-Conv layer:

1. First, analyze the conventional 3D convolution formula:

The input three-dimensional data has dimensions W×H×D with C channels, i.e., the input tensor is X of size W×H×D×C; the convolution kernel is K of size l×l×l×C×S; then the output tensor after convolution is Y of size W′×H′×D′×S.

For convenience of notation, each element of the output tensor is written as Y(x, y, z, s), and the formula for 3D convolution can be written as:

Y(x, y, z, s) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c=1..C} K(i, j, k, c, s) · X(x+i-1, y+j-1, z+k-1, c)    (2)

2. Matrixize the convolution operation:

To tensorize the 3D convolution, the 3D convolution operation must first be matrixized, and tensor train decomposition is then applied to the convolution kernel matrix. The specific steps include:

(1) Express the dimensions of each channel of the output tensor as:

H′ = H - l + 1

W′ = W - l + 1

D′ = D - l + 1    (3)

Convert the input tensor into a matrix X of size W′H′D′ × l³C, with the corresponding element transformation:

X[(x′, y′, z′), (i, j, k, c)] = X(x′+i-1, y′+j-1, z′+k-1, c)    (4)

where the row index runs over the W′H′D′ output positions and the column index over the l³C kernel positions and input channels.

(2) Convert the convolution kernel tensor into a matrix K of size l³C × S, with the corresponding element transformation:

K[(i, j, k, c), s] = K(i, j, k, c, s)    (5)

(3) Perform matrix multiplication of the input matrix X and the convolution kernel matrix K to obtain an output matrix Y of size W′H′D′ × S, and restore the output matrix Y to the output tensor:

Y(x′, y′, z′, s) = Y[(x′, y′, z′), s]    (6)

(4) Apply tensor train decomposition to the convolution kernel matrix K:

Factorize the input and output dimensions as C = C1C2…Cd and S = S1S2…Sd, tensorize the matrix K into a tensor, and perform tensor train decomposition on it; the TT decomposition process of the convolution kernel matrix K is then:

K((i, j, k), (c1, s1), …, (cd, sd)) = G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]    (7)

In formula (7), G0[i, j, k] is a matrix of size 1 × r0, and each Gt[ct, st] (t = 1, …, d) is a matrix of size rt-1 × rt, with rd = 1.

(5) To match the TT-decomposed convolution kernel, convert the input tensor into a tensor of size W×H×D×C1×…×Cd, and operate this tensor with the Tensor-Train matrices of the convolution kernel to obtain an output tensor of size (W-l+1)×(H-l+1)×(D-l+1)×S1×…×Sd:

Y(x, y, z, s1, …, sd) = Σ_{i=1..l} Σ_{j=1..l} Σ_{k=1..l} Σ_{c1=1..C1} … Σ_{cd=1..Cd} X(x+i-1, y+j-1, z+k-1, c1, …, cd) · G0[i, j, k] · G1[c1, s1] · … · Gd[cd, sd]    (8)

The above formula is the final tensor train decomposition operation formula for the 3D-TT-Conv layer.

Tensor autoencoder training module: used to train a tensor autoencoder (TT-Encoder) based on tensor train decomposition using real high-dimensional data and to output the spatial structure features of the real high-dimensional data through the tensor autoencoder. The present invention introduces the n-mode product operation from tensor algebra into the autoencoder, using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers. The dimensions of tensor data can thus be enlarged and reduced directly, i.e., the dimension-raising and dimension-reducing operations of the autoencoder, so that high-dimensional real data can be encoded into low-dimensional data. Since the tensor operations never vectorize the high-dimensional data, the spatial structure features of the high-dimensional data can be extracted.

Network training module: used to combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer and to train the generative adversarial network, obtaining a generative adversarial network based on the tensor autoencoder and tensor train decomposition. By combining the spatial structure features of the real high-dimensional data obtained after encoding with the last-layer features of the discriminator, the present invention introduces the distribution of the real high-dimensional data into the training process of the GAN, so that the KL divergence (Kullback-Leibler divergence, relative entropy) between the real distribution and the generated distribution is optimized simultaneously in both the forward and reverse directions, guiding the generator to produce more realistic and diverse data and improving the quality and diversity of the generated data.

FIG. 4 is a schematic structural diagram of a hardware device for the generative adversarial network training method for high-dimensional data provided by an embodiment of the present application. As shown in FIG. 4, the device includes one or more processors and a memory. Taking one processor as an example, the device may further include an input system and an output system.

The processor, memory, input system, and output system may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 4.

As a non-transitory computer-readable storage medium, the memory can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules. By running the non-transitory software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing of the electronic device, i.e., implements the processing method of the above method embodiment.

The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data and the like. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory may optionally include memory located remotely from the processor; such remote memory may be connected to the processing system via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The input system can receive input numeric or character information and generate signal input. The output system may include a display device such as a display screen.

The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following operations of any of the above method embodiments:

Step a: build the backbone structure of the generative adversarial network;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network.

The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.

An embodiment of the present application provides a non-transitory (non-volatile) computer storage medium storing computer-executable instructions that can perform the following operations:

Step a: build the backbone structure of the generative adversarial network;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network.

An embodiment of the present application provides a computer program product, the computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to perform the following operations:

Step a: build the backbone structure of the generative adversarial network;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features generated by the discriminator as the input of the last layer, and train the generative adversarial network.

The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined in this application may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the present application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1.一种针对高维数据的生成对抗网络训练方法,其特征在于,包括以下步骤:1. a method for generating adversarial network training for high-dimensional data, is characterized in that, comprises the following steps: 步骤a:搭建生成对抗网络骨干结构;Step a: Build the backbone structure of the generative adversarial network; 步骤b:使用张量火车分解算法对所述生成对抗网络骨干结构进行张量火车分解;Step b: use the tensor train decomposition algorithm to perform tensor train decomposition on the backbone structure of the generative adversarial network; 步骤c:使用真实高维数据训练一个基于张量火车分解的张量自编码器,通过所述张量自编码器输出真实高维数据的空间结构特征;Step c: use real high-dimensional data to train a tensor autoencoder based on tensor train decomposition, and output spatial structure features of real high-dimensional data through the tensor autoencoder; 步骤d:将所述张量自编码器的输出和判别器生成的最后一层特征相结合作为最后一层的输入,对生成对抗网络进行训练。Step d: The output of the tensor autoencoder and the features of the last layer generated by the discriminator are combined as the input of the last layer to train the generative adversarial network. 2.根据权利要求1所述的针对高维数据的生成对抗网络训练方法,其特征在于,在所述步骤a中,所述生成对抗网络骨干结构为基于3D卷积和3D反卷积的生成对抗网络骨干结构。2. The generative adversarial network training method for high-dimensional data according to claim 1, wherein in the step a, the generative adversarial network backbone structure is based on the generation of 3D convolution and 3D deconvolution Adversarial network backbone structure. 3.根据权利要求2所述的针对高维数据的生成对抗网络训练方法,其特征在于,在所述步骤b中,所述使用张量火车分解算法对所述生成对抗网络骨干结构进行张量火车分解具体为:将张量火车分解算法引入所述生成对抗网络骨干结构中所有的3D卷积和3D反卷积层,对所述3D卷积和3D反卷积层进行张量火车分解,得到3D-TT-Conv层、3D-TT-deConv层。3. The generative adversarial network training method for high-dimensional data according to claim 2, characterized in that, in the step b, the use of a tensor train decomposition algorithm to tensor the generative adversarial network backbone structure The train decomposition is specifically: introducing the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers in the backbone structure of the generative adversarial network, and performing tensor train decomposition on the 3D convolution and 3D deconvolution layers, Obtain 3D-TT-Conv layer, 3D-TT-deConv layer. 4.根据权利要求3所述的针对高维数据的生成对抗网络训练方法,其特征在于,所述对3D卷积进行张量火车分解具体包括:4. The generative adversarial network training method for high-dimensional data according to claim 3, wherein the tensor train decomposition to the 3D convolution specifically comprises: 假设输入三维数据维度为W×H×D,通道数为C,即输入张量:
4. The generative adversarial network training method for high-dimensional data according to claim 3, characterized in that performing tensor train decomposition on the 3D convolution specifically comprises:

Assume the input three-dimensional data has dimensions W×H×D with C channels, i.e. the input tensor is $\mathcal{X}\in\mathbb{R}^{W\times H\times D\times C}$; the convolution kernel, with spatial size $l$, is $\mathcal{K}\in\mathbb{R}^{l\times l\times l\times C\times S}$; and the output tensor after convolution is $\mathcal{Y}\in\mathbb{R}^{W'\times H'\times D'\times S}$.

Write each element of the output tensor as $\mathcal{Y}(x,y,z,s)$. The formula for 3D convolution is

$$\mathcal{Y}(x,y,z,s)=\sum_{i=1}^{l}\sum_{j=1}^{l}\sum_{k=1}^{l}\sum_{c=1}^{C}\mathcal{K}(i,j,k,c,s)\,\mathcal{X}(x+i-1,\,y+j-1,\,z+k-1,\,c).$$

The dimensions of each channel of the output tensor are

$$W'=W-l+1,\qquad H'=H-l+1,\qquad D'=D-l+1.$$

Convert the input tensor $\mathcal{X}$ into a matrix $X$ of size $W'H'D'\times l^{3}C$; the corresponding elements transform as

$$X\big((x,y,z),(i,j,k,c)\big)=\mathcal{X}(x+i-1,\,y+j-1,\,z+k-1,\,c).$$

Convert the convolution kernel tensor $\mathcal{K}$ into a matrix $K$ of size $l^{3}C\times S$; the corresponding elements transform as

$$K\big((i,j,k,c),s\big)=\mathcal{K}(i,j,k,c,s).$$

Perform matrix multiplication of the input matrix $X$ and the kernel matrix $K$ to obtain an output matrix $Y$ of size $W'H'D'\times S$, and restore the output matrix $Y$ to the output tensor $\mathcal{Y}\in\mathbb{R}^{W'\times H'\times D'\times S}$:

$$\mathcal{Y}(x,y,z,s)=Y\big((x,y,z),s\big).$$

Apply the tensor train decomposition to the kernel matrix $K$. Decompose the input and output channel dimensions as

$$C=\prod_{m=1}^{d}C_{m},\qquad S=\prod_{m=1}^{d}S_{m},$$

tensorize the matrix $K$ into a tensor $\widetilde{\mathcal{K}}$ of size $l^{3}\times(C_{1}S_{1})\times\cdots\times(C_{d}S_{d})$, and perform tensor train decomposition on it:

$$\widetilde{\mathcal{K}}\big((i,j,k),(c_{1},s_{1}),\ldots,(c_{d},s_{d})\big)=G_{0}[i,j,k]\,G_{1}[c_{1},s_{1}]\cdots G_{d}[c_{d},s_{d}].$$

In the above formula, $G_{0}[i,j,k]\in\mathbb{R}^{1\times r_{0}}$ and $G_{m}[c_{m},s_{m}]\in\mathbb{R}^{r_{m-1}\times r_{m}}$ (with $r_{d}=1$) are the tensor train cores.

Convert the input tensor $\mathcal{X}$ into a tensor $\widetilde{\mathcal{X}}$ of size $W\times H\times D\times C_{1}\times\cdots\times C_{d}$, and operate $\widetilde{\mathcal{X}}$ with the Tensor-Train matrix of the convolution kernel to obtain the output tensor of size $W'\times H'\times D'\times S_{1}\times\cdots\times S_{d}$:

$$\mathcal{Y}(x,y,z,s_{1},\ldots,s_{d})=\sum_{i,j,k}\;\sum_{c_{1},\ldots,c_{d}}\widetilde{\mathcal{X}}(x+i-1,\,y+j-1,\,z+k-1,\,c_{1},\ldots,c_{d})\,G_{0}[i,j,k]\,G_{1}[c_{1},s_{1}]\cdots G_{d}[c_{d},s_{d}].$$
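To make the factorization above concrete, the following sketch builds TT cores for $d=2$, contracts them back into a dense kernel purely to verify the format, and applies the result as an ordinary 3D convolution. All sizes and ranks are illustrative assumptions; an efficient 3D-TT-Conv layer would contract the input with the cores directly and never materialize the dense kernel.

```python
import torch
import torch.nn.functional as F

l = 3                                   # kernel size (assumption)
C1, C2, S1, S2 = 2, 4, 2, 4             # C = 8 input, S = 16 output channels
r0, r1 = 4, 4                           # TT ranks (assumption)

G0 = torch.randn(l, l, l, 1, r0)        # G0[i,j,k]  is 1 x r0
G1 = torch.randn(C1, S1, r0, r1)        # G1[c1,s1]  is r0 x r1
G2 = torch.randn(C2, S2, r1, 1)         # G2[c2,s2]  is r1 x 1

# K~((i,j,k),(c1,s1),(c2,s2)) = G0[i,j,k] G1[c1,s1] G2[c2,s2]
K = torch.einsum('ijkar,csrq,dtqb->ijkcdst', G0, G1, G2)
K = K.reshape(l, l, l, C1 * C2, S1 * S2)          # dense (l,l,l,C,S) kernel

x = torch.randn(1, C1 * C2, 16, 16, 16)           # input  (N, C, W, H, D)
w = K.permute(4, 3, 0, 1, 2).contiguous()         # conv3d wants (S, C, l, l, l)
y = F.conv3d(x, w)
print(y.shape)                                    # torch.Size([1, 16, 14, 14, 14])
```

Note that the output spatial size 14 = 16 - l + 1 matches the channel-dimension formula $W' = W - l + 1$ above.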
5. The generative adversarial network training method for high-dimensional data according to any one of claims 1 to 4, characterized in that in step c, training a tensor autoencoder based on tensor train decomposition using real high-dimensional data specifically comprises: introducing the n-mode product operation from tensor algebra into the autoencoder, and using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers, so that the dimensions of the tensor data are enlarged and reduced directly and the spatial structure features of the real high-dimensional data are extracted.

6. A generative adversarial network training system for high-dimensional data, characterized in that it comprises:

a network backbone building module, configured to build a generative adversarial network backbone structure;

a tensor train decomposition module, configured to perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

a tensor autoencoder training module, configured to train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and to output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

a network training module, configured to combine the output of the tensor autoencoder with the last-layer features produced by the discriminator as the input of the last layer, and to train the generative adversarial network.

7. The generative adversarial network training system for high-dimensional data according to claim 6, characterized in that the generative adversarial network backbone structure is a backbone structure based on 3D convolution and 3D deconvolution.

8. The generative adversarial network training system for high-dimensional data according to claim 7, characterized in that the tensor train decomposition module performs tensor train decomposition on the backbone structure specifically by: introducing the tensor train decomposition algorithm into all 3D convolution and 3D deconvolution layers of the backbone structure, and performing tensor train decomposition on those layers to obtain 3D-TT-Conv and 3D-TT-deConv layers.
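The n-mode product that claim 5 above (and claim 10 below) substitutes for the fully connected layer's matrix multiplication has a compact definition: unfold the tensor along mode n, multiply by the factor matrix, and fold back. Below is a minimal NumPy sketch; the 32×32×32 volume and the 8-fold reduction per mode are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mode_n_product(X, U, n):
    """Multiply matrix U (J x I_n) into mode n of tensor X (..., I_n, ...)."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
    Y = U @ Xn                                          # (J, prod of other dims)
    new_shape = (U.shape[0],) + tuple(np.delete(X.shape, n))
    return np.moveaxis(Y.reshape(new_shape), 0, n)      # fold back

# Encoder step: shrink a 32x32x32 volume to 8x8x8 with three mode products.
X = np.random.randn(32, 32, 32)
U1, U2, U3 = (np.random.randn(8, 32) for _ in range(3))
Z = mode_n_product(mode_n_product(mode_n_product(X, U1, 0), U2, 1), U3, 2)
print(Z.shape)   # (8, 8, 8)
```

Applying one such product per mode shrinks every dimension without ever flattening the volume into a vector, which is how the encoder can preserve spatial structure.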
9. The generative adversarial network training system for high-dimensional data according to claim 8, characterized in that performing tensor train decomposition on the 3D convolution specifically comprises:

Assume the input three-dimensional data has dimensions W×H×D with C channels, i.e. the input tensor is $\mathcal{X}\in\mathbb{R}^{W\times H\times D\times C}$; the convolution kernel, with spatial size $l$, is $\mathcal{K}\in\mathbb{R}^{l\times l\times l\times C\times S}$; and the output tensor after convolution is $\mathcal{Y}\in\mathbb{R}^{W'\times H'\times D'\times S}$.

Write each element of the output tensor as $\mathcal{Y}(x,y,z,s)$. The formula for 3D convolution is

$$\mathcal{Y}(x,y,z,s)=\sum_{i=1}^{l}\sum_{j=1}^{l}\sum_{k=1}^{l}\sum_{c=1}^{C}\mathcal{K}(i,j,k,c,s)\,\mathcal{X}(x+i-1,\,y+j-1,\,z+k-1,\,c).$$

The dimensions of each channel of the output tensor are

$$W'=W-l+1,\qquad H'=H-l+1,\qquad D'=D-l+1.$$

Convert the input tensor $\mathcal{X}$ into a matrix $X$ of size $W'H'D'\times l^{3}C$; the corresponding elements transform as

$$X\big((x,y,z),(i,j,k,c)\big)=\mathcal{X}(x+i-1,\,y+j-1,\,z+k-1,\,c).$$

Convert the convolution kernel tensor $\mathcal{K}$ into a matrix $K$ of size $l^{3}C\times S$; the corresponding elements transform as

$$K\big((i,j,k,c),s\big)=\mathcal{K}(i,j,k,c,s).$$

Perform matrix multiplication of the input matrix $X$ and the kernel matrix $K$ to obtain an output matrix $Y$ of size $W'H'D'\times S$, and restore the output matrix $Y$ to the output tensor $\mathcal{Y}\in\mathbb{R}^{W'\times H'\times D'\times S}$:

$$\mathcal{Y}(x,y,z,s)=Y\big((x,y,z),s\big).$$

Apply the tensor train decomposition to the kernel matrix $K$. Decompose the input and output channel dimensions as

$$C=\prod_{m=1}^{d}C_{m},\qquad S=\prod_{m=1}^{d}S_{m},$$

tensorize the matrix $K$ into a tensor $\widetilde{\mathcal{K}}$ of size $l^{3}\times(C_{1}S_{1})\times\cdots\times(C_{d}S_{d})$, and perform tensor train decomposition on it:

$$\widetilde{\mathcal{K}}\big((i,j,k),(c_{1},s_{1}),\ldots,(c_{d},s_{d})\big)=G_{0}[i,j,k]\,G_{1}[c_{1},s_{1}]\cdots G_{d}[c_{d},s_{d}].$$

In the above formula, $G_{0}[i,j,k]\in\mathbb{R}^{1\times r_{0}}$ and $G_{m}[c_{m},s_{m}]\in\mathbb{R}^{r_{m-1}\times r_{m}}$ (with $r_{d}=1$) are the tensor train cores.

Convert the input tensor $\mathcal{X}$ into a tensor $\widetilde{\mathcal{X}}$ of size $W\times H\times D\times C_{1}\times\cdots\times C_{d}$, and operate $\widetilde{\mathcal{X}}$ with the Tensor-Train matrix of the convolution kernel to obtain the output tensor of size $W'\times H'\times D'\times S_{1}\times\cdots\times S_{d}$:

$$\mathcal{Y}(x,y,z,s_{1},\ldots,s_{d})=\sum_{i,j,k}\;\sum_{c_{1},\ldots,c_{d}}\widetilde{\mathcal{X}}(x+i-1,\,y+j-1,\,z+k-1,\,c_{1},\ldots,c_{d})\,G_{0}[i,j,k]\,G_{1}[c_{1},s_{1}]\cdots G_{d}[c_{d},s_{d}].$$
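One consequence of the TT format in claims 4 and 9 is the reduction in parameter count. A back-of-envelope check with hypothetical sizes (kernel size 3, C = S = 64 factored as 8×8 with d = 2, uniform rank 8) shows the order of the saving:

```python
# Dense 3D kernel vs. TT cores; all sizes here are hypothetical.
l = 3
C1, C2, S1, S2 = 8, 8, 8, 8                     # C = S = 64
r = 8                                           # uniform TT rank (assumption)
dense = l**3 * (C1 * C2) * (S1 * S2)            # l^3 * C * S        = 110592
tt = l**3 * r + C1 * S1 * r * r + C2 * S2 * r   # G0 + G1 + G2 cores = 4824
print(dense, tt, round(dense / tt, 1))          # 110592 4824 22.9
```

Under these assumptions the TT cores hold roughly 23 times fewer parameters than the dense kernel they replace.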
10. The generative adversarial network training system for high-dimensional data according to any one of claims 6 to 9, characterized in that the tensor autoencoder training module trains a tensor autoencoder based on tensor train decomposition using real high-dimensional data specifically by: introducing the n-mode product operation from tensor algebra into the autoencoder, and using the n-mode product in place of the matrix multiplication between the input vector and the parameter matrix in the fully connected layers, so that the dimensions of the tensor data are enlarged and reduced directly and the spatial structure features of the real high-dimensional data are extracted.

11. An electronic device, comprising:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following operations of the generative adversarial network training method for high-dimensional data according to any one of claims 1 to 5:

Step a: build a generative adversarial network backbone structure;

Step b: perform tensor train decomposition on the generative adversarial network backbone structure using the tensor train decomposition algorithm;

Step c: train a tensor autoencoder based on tensor train decomposition using real high-dimensional data, and output the spatial structure features of the real high-dimensional data through the tensor autoencoder;

Step d: combine the output of the tensor autoencoder with the last-layer features produced by the discriminator as the input of the last layer, and train the generative adversarial network.
CN201911343340.3A (filed 2019-12-24, priority date 2019-12-24, status: Pending): Method and system for training generation countermeasure network for high-dimensional data and electronic equipment

Priority Applications (1)

Application Number: CN201911343340.3A | Publication: CN111340173A (en) | Title: Method and system for training generation countermeasure network for high-dimensional data and electronic equipment

Publications (1)

Publication Number: CN111340173A | Publication Date: 2020-06-26

Family

ID=71183320

Family Applications (1)

Application Number: CN201911343340.3A (Pending, published as CN111340173A) | Title: Method and system for training generation countermeasure network for high-dimensional data and electronic equipment | Priority Date: 2019-12-24 | Filing Date: 2019-12-24

Country Status (1)

CN: CN111340173A (en)


Cited By (7)

* Cited by examiner, † Cited by third party

CN112992304A * (priority 2020-08-24, published 2021-06-18): High-resolution pinkeye case data generation method, equipment and storage medium
CN112992304B * (granted 2023-10-13): High-resolution red eye case data generation method, device and storage medium
CN114302150A * (priority 2021-12-30, published 2022-04-08): Video encoding method and device, video decoding method and device, and electronic equipment
CN114302150B * (granted 2024-02-27): Video encoding method and device, video decoding method and device and electronic equipment
CN116186575A * (priority 2022-09-09, published 2023-05-30): Mammary gland sampling data processing method based on machine learning
CN116186575B * (granted 2024-02-02): A method for processing breast sampling data based on machine learning
CN119204266A * (priority 2024-11-22, published 2024-12-27): Dataset construction method and system for training large models in professional fields


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-06-26)