CN111726472B - Image anti-interference method based on encryption algorithm - Google Patents
Image anti-interference method based on encryption algorithm
- Publication number
- CN111726472B (application number CN202010371154.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- data set
- matrix
- network
- encrypted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
- H04N1/32149—Methods relating to embedding, encoding, decoding, detection or retrieval operations
- H04N1/32267—Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
- H04N1/32272—Encryption or ciphering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
An image anti-interference method based on an encryption algorithm comprises the following steps. Step 1: construct an original handwritten-digit dataset and preprocess the images. Step 2: encrypt the original handwritten-digit dataset with a matrix-transform-based image encryption technique to construct an encrypted dataset. Step 3: construct a generative adversarial network, comprising a generator and a discriminator, and train it with the original dataset. Step 4: construct a seven-layer convolutional neural network and train it separately with the original dataset and the encrypted dataset. Step 5: perform label prediction on handwritten image data based on the trained convolutional neural network and generative adversarial network. The invention adopts a deep learning algorithm represented by the generative adversarial network, uses a specific matrix-transform-based image encryption technique to transform the image space, and identifies images that have been disturbed by deep learning techniques.
Description
Technical Field
The invention relates to digital image encryption and deep neural networks, and in particular to an image anti-interference method based on an encryption algorithm.
Background
With the development of deep learning techniques, research using deep neural networks has become mainstream in the field of image generation. The Generative Adversarial Network (GAN) is widely used in this field because it can generate clearer and more realistic samples and makes loss functions easier to design.
In 2017, a Reddit user proposed the Deepfake technique and open-sourced it, and video synthesis tools such as FakeApp were soon derived from it. Based on GANs, this technology can replace the face in an original picture with another face, producing convincing forgeries. Beyond the security risks that have already surfaced, the potential effects of Deepfake extend to how the public acquires information and to the level of social trust. Over the past two years, using deep learning techniques to detect and discriminate false images has become a direction of interest for many researchers. Studying how to identify disturbed images can help us obtain correct information.
Digital image encryption technology has been widely studied because it enables secure storage and transmission of image information and improves the security of image data. Digital image encryption techniques fall mainly into three categories: image encryption based on matrix transformation or pixel permutation, image encryption based on modern cryptosystems, and image encryption based on chaos theory. Image encryption based on matrix transformation is essentially equivalent to applying a finite number of elementary matrix transformations to the image matrix, thereby scrambling the positions of the image pixels. Image encryption based on a modern cryptosystem treats the image to be transmitted as plaintext and achieves secret transmission of the image data under the control of a key through various encryption algorithms. Image encryption based on chaos theory treats the image information to be encrypted as a binary data stream under a certain coding scheme and encrypts that stream with a chaotic signal. Since digital image encryption can process and transform an image in the spatial domain, the technique can be applied to detecting disturbed images and provides a reference for judging false images.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an image anti-interference method based on an encryption algorithm, which adopts a deep learning algorithm represented by the generative adversarial network, uses a specific matrix-transform-based image encryption technique to transform the image space, and identifies images disturbed by deep learning techniques.
The technical solution adopted by the invention to solve this technical problem is as follows:
An image anti-interference method based on an encryption algorithm comprises the following steps:
Step 1: constructing an original handwritten-digit dataset and preprocessing the images;
Step 2: encrypting the original handwritten-digit dataset with a matrix-transform-based image encryption technique to construct an encrypted dataset;
Step 3: constructing a generative adversarial network, comprising a generator and a discriminator, and training it with the original dataset;
Step 4: constructing a seven-layer convolutional neural network, comprising an input layer, two convolutional layers, two pooling layers, a fully connected layer and an output layer, and training it separately with the original dataset and the encrypted dataset;
Step 5: performing label prediction on handwritten image data based on the trained convolutional neural network and generative adversarial network.
Further, in Step 1, a handwritten digit image dataset and a label set covering different writing conditions and writing habits are produced, and the dataset is preprocessed. Preprocessing grays the color images and normalizes the grayscale images to a uniform size. Graying uses the three-component weighted average method, computed as:
Gray(i,j) = 0.299R(i,j) + 0.587G(i,j) + 0.114B(i,j)
where Gray(i,j) is the pixel value of a pixel in the grayed picture, R is the pixel-value information of the red channel, G of the green channel, and B of the blue channel. Graying and normalization of the images improve computation speed while retaining gradient information.
Still further, in Step 2, encrypting the original handwritten-digit dataset with the matrix-transform-based image encryption technique comprises the following steps:
2.1) Take m consecutive pixels and replace them with m encrypted pixels. The replacement is written as C = KP, where P is the original pixel block to be replaced, C is the encrypted pixel block, and K is the key that performs this matrix transformation; the key size is consistent with the original image. The decryption process is accordingly written as P = K⁻¹C.
2.2) As the formula shows, for the encrypted image to be decryptable the key must be an invertible matrix. In this encryption algorithm the key matrix is an involutory matrix, i.e., a matrix that is its own inverse.
Let A = [A11, A12; A21, A22] be an n × n involutory matrix partitioned into four (n/2) × (n/2) sub-matrices. Since A² = I, the n-order identity matrix, it can be deduced that:
A11 + A22 = 0
A12·A21 = I − A11²
Accordingly, to generate an involutory matrix A, A22 can be any (n/2) × (n/2) matrix, with A11 = −A22. Let A12 = k(I − A11) or k(I + A11); then A21 = (1/k)(I + A11) or (1/k)(I − A11) correspondingly, where k is an arbitrary non-zero value;
2.3) The key for image encryption is obtained in step 2.2, and the original dataset from Step 1 is encrypted with this key to obtain the encrypted dataset.
The encryption algorithm exploits the properties of the involutory matrix and the corresponding corollary to construct, conveniently and quickly, an invertible matrix for image encryption and decryption.
In Step 3, a generative adversarial network is constructed, comprising a generator G and a discriminator D. When a training sample x is input into the generator G, a perturbation G(x) is generated, and the generated sample is x + G(x). The role of the discriminator is to distinguish the generated samples from the original samples, the goal being to make the two difficult to tell apart. The loss function of the whole network is:
L_GAN = E_x[log D(x)] + E_x[log(1 − D(x + G(x)))]
To limit the size of the generated perturbation and make the network training more stable, a hinge loss on the L2 norm is added:
L_hinge = E_x[max(0, ||G(x)||_2 − c)]
where c is a constant. The objective of training the network is therefore written as:
min_G max_D α·L_GAN + β·L_hinge
where α and β are parameters controlling the relative importance of the training objectives. After the generative adversarial network is constructed, the original dataset is input into the network for training.
In Step 4, a seven-layer convolutional neural network is constructed, comprising an input layer, two convolutional layers, two pooling layers, a fully connected layer and an output layer. The original dataset and the encrypted dataset are input separately during training. After the basic parameters and specifications of the network are set, a fixed number of samples is selected and fed into the convolutional neural network each time; the predicted label of each training sample is obtained at the output layer, the output is compared with the ground-truth label to obtain the residual error, and the weights and biases of the network are adjusted with the back-propagation algorithm.
In Step 5, label prediction for the handwritten image data proceeds as follows:
5.1) the handwritten image sample i_o is input into the generative adversarial network model trained in Step 3 to obtain the image sample i_n disturbed by the neural network;
5.2) the handwritten image sample i_o is encrypted with the encryption algorithm of Step 2 to obtain the encrypted sample i_en;
5.3) the image sample i_n disturbed by the neural network and the encrypted image sample i_en are input into the convolutional neural network models trained in Step 4 for label prediction; by comparing whether the label of sample i_n is consistent with the label of sample i_en, it can be determined whether the image sample has been disturbed.
The invention has the beneficial effects that it adopts a deep learning algorithm represented by the generative adversarial network, uses a specific matrix-transform-based image encryption technique to transform the image space, and identifies images disturbed by deep learning techniques.
Drawings
FIG. 1 is a flowchart of an image anti-interference method based on an encryption algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network structure according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a generative adversarial network according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIGS. 1-3, an image anti-interference method based on an encryption algorithm includes constructing and preprocessing a dataset, constructing and training a generative adversarial network and a convolutional neural network, and judging image data that has been subjected to interference.
The invention comprises the following steps:
Step 1: constructing an original handwritten-digit dataset and preprocessing the images;
Step 2: encrypting the original handwritten-digit dataset with a matrix-transform-based image encryption technique to construct an encrypted dataset;
Step 3: constructing a generative adversarial network, comprising a generator and a discriminator, and training it with the original dataset;
Step 4: constructing a seven-layer convolutional neural network, comprising an input layer, two convolutional layers, two pooling layers, a fully connected layer and an output layer, and training it separately with the original dataset and the encrypted dataset;
Step 5: performing label prediction on handwritten image data based on the trained convolutional neural network and generative adversarial network.
Further, in Step 1, 10,000 handwritten digits are written and collected under different writing conditions and writing habits, and a dataset and a label set of handwritten digit images are produced; the dataset consists of the preprocessed handwritten digit images. Preprocessing grays the color images and normalizes the grayscale images to 28 × 28. Graying uses the three-component weighted average method, computed as:
Gray(i,j) = 0.299R(i,j) + 0.587G(i,j) + 0.114B(i,j)
where Gray(i,j) is the pixel value of a pixel in the grayed picture, R is the pixel-value information of the red channel, G of the green channel, and B of the blue channel. Graying and normalization of the images improve computation speed while retaining gradient information.
Still further, in Step 2, encrypting the original handwritten-digit dataset with the matrix-transform-based image encryption technique comprises the following steps:
2.1) Take m consecutive pixels and replace them with m encrypted pixels; for example, take four pixels P11, P12, P21, P22 and replace them with four encrypted pixels C11, C12, C21, C22. In matrix form this is:
[C11 C12; C21 C22] = K·[P11 P12; P21 P22]
which is abbreviated as C = KP,
where P is the original pixel block to be replaced by the encrypted pixel block and K is the key that performs this matrix transformation; the key size is consistent with the original image, so the decryption process is written as P = K⁻¹C;
2.2) As the formula shows, for the encrypted image to be decryptable the key must be an invertible matrix. In this encryption algorithm the key matrix is an involutory matrix, i.e., a matrix that is its own inverse.
Let A = [A11, A12; A21, A22] be an n × n involutory matrix partitioned into four (n/2) × (n/2) sub-matrices. Since A² = I, the n-order identity matrix, it can be deduced that:
A11 + A22 = 0
A12·A21 = I − A11²
Accordingly, to generate an involutory matrix A, A22 can be any (n/2) × (n/2) matrix, with A11 = −A22. Let A12 = k(I − A11) or k(I + A11); then A21 = (1/k)(I + A11) or (1/k)(I − A11) correspondingly, where k is an arbitrary non-zero value;
2.3) The key for image encryption is obtained in step 2.2, and the original dataset from Step 1 is encrypted with this key to obtain the encrypted dataset.
The encryption algorithm exploits the properties of the involutory matrix and the corresponding corollary to construct, conveniently and quickly, an invertible matrix for image encryption and decryption.
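To make steps 2.1-2.3 concrete, the sketch below builds a self-invertible key with NumPy from an arbitrary block A22 and uses it to encrypt and decrypt a 28 × 28 image matrix, treating the whole image matrix as the pixel block P. It is a minimal illustration under those assumptions; the function names and the choice of k are not taken from the patent.

```python
# Illustrative sketch of key construction and block encryption; names and values are assumptions.
import numpy as np

def make_involutory_key(a22: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Build an n x n involutory key K (K @ K = I) from an (n/2) x (n/2) block A22."""
    a11 = -a22                       # A11 = -A22
    i = np.eye(a22.shape[0])
    a12 = k * (i - a11)              # A12 = k(I - A11)
    a21 = (1.0 / k) * (i + a11)      # A21 = (1/k)(I + A11)
    return np.block([[a11, a12], [a21, a22]])

def encrypt(block: np.ndarray, key: np.ndarray) -> np.ndarray:
    """C = K @ P; because K is its own inverse, applying it again recovers P."""
    return key @ block

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    key = make_involutory_key(rng.random((14, 14)), k=3.0)   # 28 x 28 involutory key
    assert np.allclose(key @ key, np.eye(28))                # K is its own inverse
    image = rng.random((28, 28))                             # stand-in for a preprocessed sample
    cipher = encrypt(image, key)
    assert np.allclose(encrypt(cipher, key), image)          # decryption: P = K^-1 C = K C
```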
In Step 3, a generative adversarial network is constructed, comprising a generator G and a discriminator D. When a training sample x is input into the generator G, a perturbation G(x) is generated, and the generated sample is x + G(x). The role of the discriminator is to distinguish the generated samples from the original samples, the goal being to make the two difficult to tell apart. The loss function of the whole network is:
L_GAN = E_x[log D(x)] + E_x[log(1 − D(x + G(x)))]
To limit the size of the generated perturbation and make the network training more stable, a hinge loss on the L2 norm is added:
L_hinge = E_x[max(0, ||G(x)||_2 − c)]
where c is a constant. The objective of training the network is therefore written as:
min_G max_D α·L_GAN + β·L_hinge
where α and β are parameters controlling the relative importance of the training objectives. After the generative adversarial network is constructed, the original dataset is input into the network for training.
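The following PyTorch sketch shows one way the losses described above could be computed for a perturbation generator G and a discriminator D (assumed to end in a sigmoid). The model definitions, the weights alpha and beta, and the bound c are assumptions made for illustration; only the structure of the losses follows the description.

```python
# Illustrative sketch of the GAN and hinge losses; model definitions and weights are assumptions.
import torch
import torch.nn.functional as F

def gan_losses(x, generator, discriminator, alpha=1.0, beta=10.0, c=0.1):
    perturbation = generator(x)            # G(x)
    x_fake = x + perturbation              # generated (disturbed) sample

    # Discriminator loss: tell original samples from generated ones.
    d_real = discriminator(x)
    d_fake = discriminator(x_fake.detach())
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))

    # Generator loss: fool the discriminator, plus a hinge penalty on the L2 norm
    # of the perturbation to keep it small and stabilize training.
    d_on_fake = discriminator(x_fake)
    loss_gan = F.binary_cross_entropy(d_on_fake, torch.ones_like(d_on_fake))
    loss_hinge = torch.mean(torch.clamp(
        perturbation.flatten(1).norm(p=2, dim=1) - c, min=0.0))
    loss_g = alpha * loss_gan + beta * loss_hinge
    return loss_d, loss_g
```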
In Step 4, a seven-layer convolutional neural network is constructed, comprising an input layer, two convolutional layers, two pooling layers, a fully connected layer and an output layer. Neurons in the convolutional layers are connected to the previous layer through local receptive fields and extract local features through the convolution operation; these layers use the monotonically increasing sigmoid function as the activation function. Each convolutional layer is followed by a pooling layer, which performs a secondary feature extraction and reduces the dimensionality of the picture. During training, the original dataset and the encrypted dataset are input separately. After the basic parameters and specifications of the network are set, a fixed number of samples is selected and fed into the convolutional neural network each time; the predicted label of each training sample is obtained at the output layer, the output is compared with the ground-truth label to obtain the residual error, and the weights and biases of the network are adjusted with the back-propagation algorithm.
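A possible PyTorch realization of the seven-layer network just described is sketched below; the channel counts, kernel sizes and hidden width are assumptions, since the text only specifies the layer types and the sigmoid activation.

```python
# Illustrative sketch of the seven-layer CNN; channel and kernel sizes are assumptions.
import torch.nn as nn

class SevenLayerCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # convolutional layer 1: 1x28x28 -> 6x28x28
            nn.Sigmoid(),                               # sigmoid activation, as in the text
            nn.AvgPool2d(2),                            # pooling layer 1: -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # convolutional layer 2: -> 16x10x10
            nn.Sigmoid(),
            nn.AvgPool2d(2),                            # pooling layer 2: -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),                 # fully connected layer
            nn.Sigmoid(),
            nn.Linear(120, num_classes),                # output layer: label scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```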
In Step 5, label prediction for the handwritten image data proceeds as follows:
5.1) the handwritten image sample i_o is input into the generative adversarial network model trained in Step 3 to obtain the image sample i_n disturbed by the neural network;
5.2) the handwritten image sample i_o is encrypted with the encryption algorithm of Step 2 to obtain the encrypted sample i_en;
5.3) the image sample i_n disturbed by the neural network and the encrypted image sample i_en are input into the convolutional neural network models trained in Step 4 for label prediction; by comparing whether the label of sample i_n is consistent with the label of sample i_en, it can be determined whether the image sample has been disturbed.
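Putting steps 5.1-5.3 together, the decision rule can be sketched as below. It assumes the disturbed sample i_n is classified by the CNN trained on the original dataset and the encrypted sample i_en by the CNN trained on the encrypted dataset; the function and argument names are illustrative, and encrypt_fn stands for the matrix-transform encryption of Step 2 applied to a tensor.

```python
# Illustrative sketch of the Step-5 decision rule; model and function names are assumptions.
import torch

def is_disturbed(i_o, gan_generator, cnn_original, cnn_encrypted, encrypt_fn) -> bool:
    """Return True if the labels of the disturbed and encrypted versions of i_o disagree."""
    with torch.no_grad():
        i_n = i_o + gan_generator(i_o)                # 5.1) sample disturbed by the neural network
        i_en = encrypt_fn(i_o)                        # 5.2) encrypted sample
        label_n = cnn_original(i_n).argmax(dim=1)     # 5.3) label of the disturbed sample
        label_en = cnn_encrypted(i_en).argmax(dim=1)  #      label of the encrypted sample
    return bool((label_n != label_en).any().item())   # inconsistent labels -> sample was disturbed
```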
As described above, an embodiment of the invention for judging disturbed images with an encryption algorithm has been introduced. The invention judges disturbed images by constructing and preprocessing handwritten image data, encrypting it with the encryption algorithm, constructing and training a generative adversarial network and a convolutional neural network, and performing label prediction. The invention thus provides a new, encryption-algorithm-based method for judging whether an image has been disturbed by a neural network.
The above-described embodiments are only preferred embodiments of the present invention; they are illustrative rather than restrictive, and any person skilled in the art may substitute or modify the technical solution of the present invention and its inventive concept within the scope of the present invention.
Claims (6)
1. An image anti-interference method based on an encryption algorithm is characterized by comprising the following steps:
Step 1: constructing an original handwritten-digit dataset and preprocessing the images;
Step 2: encrypting the original handwritten-digit dataset with a matrix-transform-based image encryption technique to construct an encrypted dataset;
Step 3: constructing a generative adversarial network comprising a generator G and a discriminator D, and training it with the original handwritten-digit dataset;
Step 4: constructing a convolutional neural network and training it separately with the original handwritten-digit dataset and the encrypted dataset;
Step 5: performing label prediction on handwritten image data based on the trained convolutional neural network and generative adversarial network;
in Step 5, label prediction for the handwritten image data proceeds as follows:
5.1) the handwritten image sample i_o is input into the trained generative adversarial network model to obtain the image sample i_n disturbed by the neural network;
5.2) the handwritten image sample i_o is encrypted with the image encryption technique to obtain the encrypted sample i_en;
5.3) the image sample i_n disturbed by the neural network and the encrypted image sample i_en are input into the trained convolutional neural network model for label prediction; by comparing whether the label of sample i_n is consistent with the label of sample i_en, it can be determined whether the image sample has been disturbed.
2. The image anti-interference method based on an encryption algorithm according to claim 1, characterized in that: in Step 1, a handwritten digit image dataset and a label set covering different writing conditions and writing habits are produced and the dataset is preprocessed; preprocessing grays the color images with the three-component weighted average method and then normalizes the grayscale images to a uniform size.
3. The image anti-interference method based on an encryption algorithm according to claim 1 or 2, characterized in that: in Step 2, an encryption matrix is constructed with the matrix-transform-based image encryption technique and the original handwritten-digit dataset is encrypted; the encrypted image matrix has the same size as the original image matrix, yielding the encrypted dataset.
4. The image anti-interference method based on an encryption algorithm according to claim 3, characterized in that the image encryption technique comprises the following steps:
2.1) taking any square pixel block of the normalized grayscale image matrix and multiplying it by the encryption matrix to obtain the encrypted pixel block, i.e. C = KP, where P is the original pixel block to be replaced by the encrypted pixel block and K is the key that performs this matrix transformation, the key size remaining consistent with the original pixel block, so that the decryption process is written as P = K⁻¹C;
2.2) as the formula shows, for the encrypted image to be decryptable the key must be an invertible matrix; in the encryption algorithm the key matrix is an involutory matrix, i.e., a matrix that is its own inverse; let A = [A11, A12; A21, A22] be an n × n involutory matrix partitioned into four (n/2) × (n/2) sub-matrices; since A² = I, the n-order identity matrix, it can be deduced that:
A11 + A22 = 0
A12·A21 = I − A11²
accordingly, to generate an involutory matrix A, A22 can be any (n/2) × (n/2) matrix, with A11 = −A22; let A12 = k(I − A11) or k(I + A11), then A21 = (1/k)(I + A11) or (1/k)(I − A11) correspondingly, where k is an arbitrary non-zero value;
2.3) the key for image encryption is obtained in 2.2), and the original handwritten-digit dataset is encrypted with this key to obtain the encrypted dataset.
5. The image anti-interference method based on an encryption algorithm according to claim 1 or 2, characterized in that: in Step 3, a generative adversarial network is constructed whose loss function is:
L_GAN = E_x[log D(x)] + E_x[log(1 − D(x + G(x)))]
to limit the size of the generated perturbation and make the network training more stable, a hinge loss on the L2 norm is added:
L_hinge = E_x[max(0, ||G(x)||_2 − c)]
where c is a constant; the objective of training the network is therefore written as:
min_G max_D α·L_GAN + β·L_hinge
where α and β are parameters controlling the relative importance of the training objectives; after the generative adversarial network is constructed, the original handwritten-digit dataset is input into the network for training.
6. The image anti-interference method based on an encryption algorithm according to claim 1 or 2, characterized in that: in Step 4, a seven-layer convolutional neural network is constructed, comprising an input layer, two convolutional layers, two pooling layers, a fully connected layer and an output layer; the original handwritten-digit dataset and the encrypted dataset are input separately during training; after the basic parameters and specifications of the network are set, a fixed number of samples is selected and fed into the convolutional neural network each time, the predicted label of each training sample is obtained at the output layer, the output is compared with the ground-truth label to obtain the residual error, and the weights and biases of the network are adjusted with the back-propagation algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010371154.7A CN111726472B (en) | 2020-05-06 | 2020-05-06 | Image anti-interference method based on encryption algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111726472A CN111726472A (en) | 2020-09-29 |
CN111726472B (en) | 2022-04-08
Family
ID=72564189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010371154.7A Active CN111726472B (en) | 2020-05-06 | 2020-05-06 | Image anti-interference method based on encryption algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111726472B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112417467B (en) * | 2020-10-26 | 2022-12-06 | 南昌大学 | Image encryption method based on anti-neurocryptography and SHA control chaos |
CN112398641A (en) * | 2020-11-17 | 2021-02-23 | 上海桂垚信息科技有限公司 | Application method of AES encryption algorithm on encryption chip |
CN112488294A (en) * | 2020-11-20 | 2021-03-12 | 北京邮电大学 | Data enhancement system, method and medium based on generation countermeasure network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107153810A (en) * | 2016-03-04 | 2017-09-12 | 中国矿业大学 | A kind of Handwritten Numeral Recognition Method and system based on deep learning |
CN107958259A (en) * | 2017-10-24 | 2018-04-24 | 哈尔滨理工大学 | A kind of image classification method based on convolutional neural networks |
CN110490128A (en) * | 2019-08-16 | 2019-11-22 | 南京邮电大学 | A kind of hand-written recognition method based on encryption neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10825219B2 (en) * | 2018-03-22 | 2020-11-03 | Northeastern University | Segmentation guided image generation with adversarial networks |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |