CN110334069A - Data sharing method and relevant apparatus between multi-process - Google Patents
- Publication number
- CN110334069A (application CN201910620883.9A)
- Authority
- CN
- China
- Prior art keywords
- file
- data
- application
- cache
- cache file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/176—Support for shared access to files; File sharing support
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a data sharing method and related apparatus for multiple processes. In the method, the applications carried by the application containers on the same host each map a cache file through the MMAP shared mapping mode, realizing a memory-based shared cache. Every application on the host then performs data access through this shared cache. On the one hand, a process accesses the shared cache data as conveniently and quickly as it accesses stack or heap memory, which improves the efficiency of data exchange between processes; on the other hand, only one copy of the shared cache data exists in the physical memory of the host, which effectively reduces the overhead of the system protocol stack and improves the overall performance of the system. Further, a cache loading master control component makes the shared cache data take effect uniformly on every host of the distributed computing system only after each host has loaded the shared cache data successfully, which guarantees the consistency of the cached data on each compute node.
Description
Technical field
The present invention relates to the field of distributed computing systems, and more specifically to a data sharing method, apparatus, system, readable storage medium and related devices for multiple processes.
Background technique
Before container technologies such as Docker and Rocket appeared, deploying multiple homogeneous or heterogeneous applications on the same computer was very difficult. The underlying libraries that different applications depend on may conflict, and different applications may compete for computing resources such as the CPU (Central Processing Unit) and memory. With the resource isolation provided by containers, multiple applications can be deployed on one host so that every corner of the host's computing resources is used, raising resource utilization by a factor of 5 to 10. At the same time, container technology combined with a microservice architecture splits a monolithic application into smaller microservices, naturally constructing a distributed computing system on a single host.
In a distributed computing system, in order to accelerate access to server-side data, a process on a compute node usually introduces a cache to manage frequently accessed data. Meanwhile, in a distributed scenario the server-side data may change at any time; to guarantee the validity of the cache each process maintains, the cache managed by a process must be refreshed from time to time. However, whether each compute node periodically pulls data from the server side or the server side pushes data to each node's cache at irregular intervals, memory consumption is imposed on the system.
For a multi-threaded model, since the threads share an address space, data can simply be stored in a map. For data sharing between processes, however, the shared memory mechanism provided by the operating system, such as SysV SHM, must be relied upon. For read-only data, performance can be further improved by a lock-free mechanism in which the cache is updated by switching between A/B blocks. However, because each Docker running instance has its own namespaces, the SysV SHM mechanism can no longer be used to implement shared memory between processes.
Summary of the invention
In view of this, the present invention proposes a data sharing method, apparatus, system, readable storage medium and related devices for multiple processes, with the purpose of realizing high-speed communication between multiple processes.
To achieve the above goal, the proposed scheme is as follows:
A data sharing method between multiple processes, comprising:
mapping the cache directories of all application containers on the same host under the same folder of the host;
generating a cache file from the data cached by the applications carried by the application containers on the same host, and storing the cache file under said folder;
each application carried by an application container mapping the cache file under said folder by the MMAP (memory-mapped file) shared mapping mode;
after the application carried by an application container updates the cached data, updating the cache file under said folder and updating the cache file path pointed to by the folder's soft link;
before each access to the cache file under said folder, each application carried by an application container detecting whether the cache file path pointed to by the folder's soft link has changed; if so, the application in each application container releasing its original mapping of the cache file and mapping the cache file at the changed path by the MMAP shared mapping mode.
Optionally, the step of updating the cache file under said folder and updating the cache file path pointed to by the folder's soft link after the application carried by the application container updates the cached data specifically comprises:
after the application carried by the application container updates the cached data, sending a load request instruction to a cache loading master control component;
after receiving a load instruction sent by the cache loading master control component, updating the cache file according to the updated cached data;
after the cache file is updated successfully, sending a load-success instruction to the cache loading master control component;
after receiving a validation instruction sent by the cache loading master control component, updating the cache file path pointed to by the folder's soft link, wherein the validation instruction is issued by the cache loading master control component after it receives the load-success instructions of all loading components.
A data sharing apparatus between multiple processes, comprising:
a shared cache file unit, configured to map the cache directories of all application containers on the same host under the same folder of the host;
a cache loading component, configured to generate a cache file from the data cached by the applications carried by the application containers on the same host and store it under said folder;
a mapping unit, by which each application carried by an application container maps the cache file under said folder by the MMAP shared mapping mode;
the cache loading component being further configured to, after the application carried by an application container updates the cached data, update the cache file under said folder and update the cache file path pointed to by the folder's soft link;
a cache sharing update unit, configured to, before each access by an application carried by an application container to the cache file under said folder, detect whether the cache file path pointed to by the folder's soft link has changed; if so, the application in each application container releases its original mapping of the cache file and maps the cache file at the changed path by the MMAP shared mapping mode.
Optionally, the cache loading component specifically comprises:
a request subunit, configured to send a load request instruction to the cache loading master control component after the application carried by the application container updates the cached data;
an update subunit, configured to update the cache file according to the updated cached data after receiving a load instruction sent by the cache loading master control component;
a feedback subunit, configured to send a load-success instruction to the cache loading master control component after the cache file is updated successfully;
a validation subunit, configured to update the cache file path pointed to by the folder's soft link after receiving a validation instruction sent by the cache loading master control component, wherein the validation instruction is issued by the cache loading master control component after it receives the load-success instructions of all loading components.
A data sharing system between multiple processes, comprising N of the above data sharing apparatuses and one cache loading master control component, wherein each data sharing apparatus is arranged in one host and N is an integer not less than 2.
A readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements each step of the above data sharing method.
A data sharing device, comprising a memory and a processor;
the memory being configured to store a computer program;
the processor being configured to execute the computer program so as to implement each step of the above data sharing method.
Compared with the prior art, the technical solution of the present invention has the following advantages:
In the data sharing method and related apparatus provided by the above technical solution, the applications carried by the application containers on the same host map the cache file by the MMAP shared mapping mode, realizing a memory-based shared cache. Every application on the host performs data access through this shared cache. On the one hand, a process accesses the shared cache data as conveniently and quickly as it accesses stack or heap memory, which improves the efficiency of data exchange between processes; on the other hand, only one copy of the shared cached data exists in the physical memory of the host, which effectively reduces the overhead of the system protocol stack and improves the overall performance of the system.
Further, the cache loading master control component makes the shared cache data take effect uniformly on every host of the distributed computing system only after each host has loaded the shared cache data successfully, which guarantees the consistency of the cached data on each compute node.
Of course, a product implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Detailed description of the invention
In order to explain the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic diagram of the logical structure of a data sharing apparatus between multiple processes provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the data composition of a cache file provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the principle of the hash + linked list indexing mode;
Fig. 4 is a schematic diagram of the principle of the 3-D matrix + index linked list indexing mode;
Fig. 5 is a schematic diagram of the principle of the 3-D matrix + n-ary tree + linked list indexing mode;
Fig. 6 is a schematic diagram of the directory structure of the cache files provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the logical structure of a data sharing system between multiple processes provided by an embodiment of the present invention;
Fig. 8 is a schematic flowchart of a data sharing method between multiple processes provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of a data sharing device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a data sharing apparatus between multiple processes is provided in this embodiment and is deployed per host 31. The data sharing apparatus comprises: a shared cache file unit 11, a cache loading component 12, a mapping unit (not shown) and a cache sharing update unit (not shown).
The shared cache file unit 11 is configured to map the cache directory of each application container on the same host 31 under the same folder F of the host 31. Specifically, a PaaS (Platform-as-a-Service) technology such as K8s can map the cache directories of the application containers under the same folder of the host, so that the applications carried by the application containers can access the same files. That is, every application container on the host is mounted on the same file path, so that the application carried by each container can access all files under that path.
The cache loading component 12 is configured to generate a cache file from the data cached by the applications carried by the application containers on the same host and store it under folder F. Specifically, a cache file containing control information, data and an index is generated according to the index algorithm configured for the cache file. Referring to Fig. 2 for the data composition of each cache file: TableHead (control information) records the total number of records and the index offset; Record (data) is the application-defined data structure, mapping the database table definition; Index is the created index data.
An index serves to locate data efficiently according to the rules and characteristics by which the application service accesses the data; indexes can be created with different index algorithms for different dimensions of the data. The hash + linked list indexing mode is suitable for any table that needs an index. Referring to Fig. 3, in the hash + linked list indexing mode a hash bucket array is created, the size of which is usually a very large prime number; each element of the array is called an entry and points to a linked list. One or more fields needed in the query condition are concatenated into a string and mapped to an entry by a hash algorithm, and the data pointer is then inserted into the linked list.
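The hash + linked list indexing mode can be sketched as follows. This is an in-memory illustration under stated assumptions: the bucket count, the field separator, and Python object references standing in for the in-file offset pointers are all illustrative choices, not part of the patent:

```python
# Hash + linked-list index: a bucket array (ideally a large prime size),
# each entry heading a chain of (key, offset) nodes.
BUCKETS = 97  # in practice a very large prime number

class Node:
    def __init__(self, key, offset, nxt=None):
        self.key, self.offset, self.next = key, offset, nxt

table = [None] * BUCKETS

def index_insert(fields, offset):
    key = "|".join(fields)                  # concatenate the query-condition fields
    b = hash(key) % BUCKETS
    table[b] = Node(key, offset, table[b])  # push the data pointer onto the chain

def index_lookup(fields):
    key = "|".join(fields)
    node = table[hash(key) % BUCKETS]
    while node:                             # walk the chain on hash collision
        if node.key == key:
            return node.offset
        node = node.next
    return None

index_insert(["CA1234", "2019-07-01"], 128)
```

Lookups hash the same concatenated key and then walk only the one chain, so the cost stays near O(1) when the bucket count is large relative to the record count.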
The 3-D matrix + linked list indexing mode is suitable for creating an index on a 3-letter-code field. A three-letter code is the specific code, formulated by the International Air Transport Association, that represents an airport. Referring to Fig. 4, in the 3-D matrix + index linked list indexing mode, a 3-letter-code matrix is built from the 3-letter-code field in the query condition, and each point in the matrix points to the linked list of the data matching that 3-letter-code field. The 3-letter-code matrix is a 36*36*36 hash matrix; each point of the matrix is called an entry and points to a singly linked list, and each item in the linked list is a pointer to a piece of data.
The 3-D matrix + index linked list indexing mode requires that N in the application's key key[N] be >= 3 and that the first three characters be letters or digits. The index algorithm hashes the three characters key[0], key[1] and key[2]; the hash algorithm ensures that the hash value falls on some point of the 3-letter-code matrix, and the data pointer is then inserted into the linked list. The pointers described above all refer to offsets.
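A sketch of the 36*36*36 matrix hash over key[0], key[1], key[2]: the alphabet ordering and the use of Python lists in place of in-file offset chains are illustrative assumptions, but the flattening shows how every valid 3-character prefix lands on exactly one of the 36**3 entries:

```python
import string

# The 36-symbol alphabet (digits + uppercase letters) of the 3-letter-code matrix.
ALPHABET = string.digits + string.ascii_uppercase

def code_entry(key):
    # key[0..2] must be letters or digits, as the mode requires.
    assert len(key) >= 3 and all(c in ALPHABET for c in key[:3])
    a, b, c = (ALPHABET.index(ch) for ch in key[:3])
    return (a * 36 + b) * 36 + c          # flat index into the 36**3 matrix

matrix = [[] for _ in range(36 ** 3)]     # each cell stands in for a linked list

def matrix_insert(key, offset):
    matrix[code_entry(key)].append(offset)

matrix_insert("PEK", 4096)
```

Because the mapping is a bijection on the first three characters, a query by airport code reaches its chain with three table lookups and no collisions between distinct codes.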
The 3-D matrix + n-ary tree + linked list indexing mode is suitable for creating an index on two fields, one of which meets the 3-letter-code matrix requirement and the other of which consists of 3 uppercase English letters. Referring to Fig. 5, this mode builds the three-dimensional matrix (i.e. the 3-letter-code matrix) from the field that meets the 3-letter-code matrix requirement, determining the entry corresponding to each record; and, from the characters of the other field, builds a three-level 26-ary tree, the 3 characters being levels 1, 2 and 3 of the tree, with each leaf node of the tree being a linked list.
The cache loading component 12 is also configured to, after the application carried by an application container updates the cached data, update the cache file under folder F and update the cache file path pointed to by the folder's soft link. The directory structure of the cache files is shown in Fig. 6. The soft link here is a Linux file soft link (symbolic link). After the application updates the cached data, the soft link is changed from pointing to the old cache file path to pointing to the cache file path of the new data. The process of updating the cache file path pointed to by the soft link is exactly the process by which the updated cached data takes effect. Except for the first full load of all data tables, only a few data tables usually change during actual operation, so the cache loading component 12 only needs to load the changed tables; for unchanged tables a "hard link" is created instead of copying the data file.
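The soft-link swap and hard-link reuse described above can be sketched on POSIX as follows. The directory layout, version names, and function names are illustrative assumptions; the essential points are that unchanged tables are hard-linked rather than copied, and that `os.replace` over the symlink makes the new version take effect atomically:

```python
import os
import tempfile

def publish_version(base, version, changed, unchanged_from=None):
    vdir = os.path.join(base, version)
    os.makedirs(vdir, exist_ok=True)
    for name, data in changed.items():            # only changed tables are written
        with open(os.path.join(vdir, name), "wb") as f:
            f.write(data)
    if unchanged_from:
        for name in os.listdir(unchanged_from):   # unchanged tables: hard link, no copy
            if name not in changed:
                os.link(os.path.join(unchanged_from, name),
                        os.path.join(vdir, name))
    tmp = os.path.join(base, ".current.tmp")
    os.symlink(version, tmp)
    os.replace(tmp, os.path.join(base, "current"))  # atomic repoint on POSIX

base = tempfile.mkdtemp()
publish_version(base, "v1", {"flights.dat": b"old"})
publish_version(base, "v2", {"prices.dat": b"new"},
                unchanged_from=os.path.join(base, "v1"))
```

Readers that resolve `current` before `v2` is published keep seeing a complete `v1` tree; the swap never exposes a half-written state.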
The mapping unit is used by the application carried by each application container to map the cache file under the folder by the MMAP shared mapping mode. MMAP is a Linux system call providing memory-mapped files; the present invention maps the cache file in MAP_SHARED mode, so that multiple applications share one copy of memory.
The cache sharing update unit is configured to, before each access by an application carried by an application container to the cache file under the folder, detect whether the cache file path pointed to by the folder's soft link has changed; if so, the application carried by each application container releases its original mapping of the cache file and maps the cache file at the changed path by the MMAP shared mapping mode. munmap (releasing a memory mapping) is the Linux system call used to release the mapping; since multiple applications share one copy of memory, the memory is truly reclaimed by the operating system only after the applications of all application containers have released the mapping.
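The MAP_SHARED behaviour that the mapping unit relies on can be demonstrated in a few lines: two independent mappings of the same cache file (standing in for two containerized processes) see the same physical pages, so a write through one is visible through the other without any copying. File names here are illustrative:

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.PAGESIZE)      # back the mapping with one page

fd1 = os.open(path, os.O_RDWR)
fd2 = os.open(path, os.O_RDONLY)
# MAP_SHARED: changes are carried through to the underlying file and are
# visible to every other mapping of that file.
m1 = mmap.mmap(fd1, mmap.PAGESIZE, mmap.MAP_SHARED,
               mmap.PROT_READ | mmap.PROT_WRITE)
m2 = mmap.mmap(fd2, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_READ)

m1[0:5] = b"hello"            # write via the first mapping
shared_view = bytes(m2[0:5])  # read via the second: same physical memory

m1.close()                    # the analogue of munmap; pages are reclaimed
m2.close()                    # only once every mapping is released
os.close(fd1)
os.close(fd2)
```

In the apparatus, each container process holds one such mapping of the file under folder F, so the cache occupies one copy of physical memory regardless of how many processes read it.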
An application microservice component is the smallest work execution unit in the data access architecture. Each item of work done by an application in providing a service can be abstracted as an application microservice component. The host 31 carries multiple application microservice components through container technology, and at the same time supports providing data storage and a file sharing mechanism, in the form of volumes, for the application microservice components running on it.
In order to make the cache files of multiple hosts consistent, i.e. share identical file data, the present invention provides a data sharing system between multiple processes. Referring to Fig. 7, the system includes N data sharing apparatuses as described in Fig. 1 and one cache loading master control component 21, N being an integer not less than 2. Each data sharing apparatus is arranged in one host 31.
The cache loading master control component 21 is responsible for monitoring and managing the cache loading components 12 in all hosts 31. In a distributed environment, multiple hosts 31 are online simultaneously and each host 31 generates a cache file; the cache loading master control component 21 keeps the cached data on all hosts consistent. It does so in two phases. In the first phase, the cache loading master control component 21 obtains, through k8s, the IPs of the cache loading components 12 on all hosts 31, sends a load instruction to each cache loading component 12, and confirms that the data of all cache loading components 12 have been loaded successfully. In the second phase, the cache loading master control component 21 sends a validation instruction to each cache loading component 12, notifying each cache loading component 12 to make the cached data take effect. Since the data cached by the applications in each host 31 and the updates to that data are the same everywhere, making the cached data take effect only after the cache loading master control component 21 has confirmed that every cache loading component 12 has loaded successfully guarantees the consistency of the cached data on all hosts.
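The two-phase protocol of the master control component can be sketched as follows. In-process objects stand in for the k8s discovery and network messaging of the real system, and all class and method names are illustrative; the point is the ordering: no loader takes effect until every loader has reported a successful load:

```python
class Loader:
    """Stand-in for a cache loading component 12 on one host."""
    def __init__(self):
        self.loaded = False
        self.effective = False

    def load(self):
        self.loaded = True          # update the cache file here
        return True                 # the load-success instruction

    def take_effect(self):
        assert self.loaded          # phase 2 must follow a successful phase 1
        self.effective = True       # repoint the soft link here

class MasterControl:
    """Stand-in for the cache loading master control component 21."""
    def __init__(self, loaders):
        self.loaders = loaders      # discovered via k8s in the real system

    def refresh(self):
        if not all(l.load() for l in self.loaders):   # phase 1: load everywhere
            return False                              # abort: nothing took effect
        for l in self.loaders:                        # phase 2: validation instruction
            l.take_effect()
        return True

cluster = [Loader() for _ in range(3)]
ok = MasterControl(cluster).refresh()
```

If any loader fails in phase 1, no host repoints its soft link, so readers on every host keep seeing the same previous version of the cache.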
The cache loading component 12 includes a request subunit, an update subunit, a feedback subunit and a validation subunit.
The request subunit is configured to send a load request instruction to the cache loading master control component 21 after the application carried by the application container updates the cached data.
The update subunit is configured to update the cache file according to the updated cached data after receiving the load instruction sent by the cache loading master control component 21.
The feedback subunit is configured to send a load-success instruction to the cache loading master control component 21 after the cache file is updated successfully.
The validation subunit is configured to update the cache file path pointed to by the folder's soft link after receiving the validation instruction sent by the cache loading master control component, wherein the validation instruction is issued by the cache loading master control component 21 after it receives the load-success instructions of all loading components.
Referring to Fig. 8, a data sharing method between multiple processes provided by the present invention comprises the steps:
S81: mapping the cache directories of all application containers on the same host under the same folder F of the host.
S82: generating a cache file from the data cached by the applications carried by the application containers on the same host and storing it under folder F.
S83: each application carried by an application container mapping the cache file under folder F by the MMAP shared mapping mode.
S84: after the application carried by an application container updates the cached data, updating the cache file under folder F and updating the cache file path pointed to by the folder's soft link.
S85: before each access to the cache file under folder F, each application carried by an application container detecting whether the cache file path pointed to by the folder's soft link has changed; if so, the application in each application container releases its original mapping of the cache file and maps the cache file at the changed path by the MMAP shared mapping mode.
In order to realize the consistency of the cache files of the hosts, the step of updating the cache file under the folder and updating the cache file path pointed to by the folder's soft link after the application carried by the application container updates the cached data specifically comprises:
after the application carried by the application container updates the cached data, sending a load request instruction to the cache loading master control component;
after receiving the load instruction sent by the cache loading master control component, updating the cache file according to the updated cached data;
after the cache file is updated successfully, sending a load-success instruction to the cache loading master control component;
after receiving the validation instruction sent by the cache loading master control component, updating the cache file path pointed to by the folder's soft link, wherein the validation instruction is issued by the cache loading master control component after it receives the load-success instructions of all loading components.
The aforementioned method embodiment is described as a series of action combinations for simplicity of description; however, those skilled in the art should understand that the present invention is not limited by the described sequence of actions, since according to the present invention certain steps may be performed in other orders or simultaneously.
The data sharing apparatus provided by the embodiments of the present invention can be applied to data sharing devices, i.e. hosts, such as cloud platforms, servers and server clusters. A server may be one or more of a rack server, a blade server, a tower server and a cabinet server. The data sharing device in the present invention is a data sharing device on which a Linux operating system is installed. Referring to Fig. 9, a schematic diagram of a preferred embodiment of the data sharing device of the present invention: the hardware structure of the data sharing device may include at least one processor 91, at least one communication interface 92, at least one memory 93 and at least one communication bus 94.
In the embodiments of the present invention, the number of processors 91, communication interfaces 92, memories 93 and communication buses 94 is at least one, and the processor 91, the communication interface 92 and the memory 93 communicate with one another through the communication bus 94.
In some embodiments the processor 91 may be a CPU (Central Processing Unit).
The communication interface 92 may include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the intelligent terminal and other electronic devices or systems.
The memory 93 includes at least one type of readable storage medium. The readable storage medium may be an NVM (non-volatile memory) such as a flash memory, a hard disk, a multimedia card or a card-type memory, or may be a high-speed RAM (random access memory).
The memory 93 stores a computer program, and the processor 91 can invoke the computer program stored in the memory 93, the computer program being used for:
mapping the cache directories of all application containers on the same host under the same folder of the host;
generating a cache file from the data cached by the applications carried by the application containers on the same host, and storing the cache file under said folder;
each application carried by an application container mapping the cache file under said folder by the MMAP shared mapping mode;
after the application carried by an application container updates the cached data, updating the cache file under said folder and updating the cache file path pointed to by the folder's soft link;
before each access to the cache file under said folder, each application carried by an application container detecting whether the cache file path pointed to by the folder's soft link has changed; if so, the application in each application container releasing its original mapping of the cache file and mapping the cache file at the changed path by the MMAP shared mapping mode.
The refinement and extension functions of the program can refer to the above description.
The embodiments of the present invention also provide a readable storage medium which can store a computer program suitable for execution by a processor, the computer program being used for:
mapping the cache directories of all application containers on the same host under the same folder of the host;
generating a cache file from the data cached by the applications carried by the application containers on the same host, and storing the cache file under said folder;
each application carried by an application container mapping the cache file under said folder by the MMAP shared mapping mode;
after the application carried by an application container updates the cached data, updating the cache file under said folder and updating the cache file path pointed to by the folder's soft link;
before each access to the cache file under said folder, each application carried by an application container detecting whether the cache file path pointed to by the folder's soft link has changed; if so, the application in each application container releasing its original mapping of the cache file and mapping the cache file at the changed path by the MMAP shared mapping mode.
The refinement and extension functions of the program can refer to the above description.
Herein, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element limited by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may refer to each other.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. A data sharing method among multiple processes, characterized by comprising:
mapping the cache directory of each application container on the same host to the same folder of the host;
generating cache files from the data cached by the applications carried by the application containers on the same host, and storing the cache files under the folder;
the application carried by each application container mapping the cache files under the folder by the MMAP sharing mode;
after the application carried by an application container updates its cached data, updating the cache file under the folder and updating the cache file path pointed to by the soft link of the folder;
before each access to a cache file under the folder, the application carried by each application container detecting whether the cache file path pointed to by the soft link of the folder has changed, and if so, the application in each application container releasing its original mapping of the cache file and mapping the cache file at the changed cache file path by the MMAP sharing mode.
2. The data sharing method according to claim 1, characterized in that the step of, after the application carried by an application container updates its cached data, updating the cache file under the folder and updating the cache file path pointed to by the soft link of the folder specifically comprises:
after the application carried by the application container updates its cached data, sending a load request instruction to a cache load master control component;
after receiving a load instruction sent by the cache load master control component, updating the cache file according to the updated cached data;
after the cache file is updated successfully, sending a load success instruction to the cache load master control component;
after receiving a validation instruction sent by the cache load master control component, updating the cache file path pointed to by the soft link of the folder, wherein the validation instruction is issued by the cache load master control component after it has received the load success instructions of all load components.
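The coordination in claim 2 resembles a two-phase commit: every load component updates its cache file first, and the soft links are repointed only after the master control component has collected a load success from all of them. A minimal single-process sketch, with hypothetical class and method names (`LoadComponent`, `CacheLoadMaster`, and their methods are illustrative, not from the patent):

```python
# Each LoadComponent stands for the cache load component on one host;
# CacheLoadMaster stands for the cache load master control component.

class LoadComponent:
    def __init__(self, name):
        self.name = name
        self.cache_file = "cache.v1"
        self.link_target = "cache.v1"   # what the folder's soft link points at

    def on_load_instruction(self, new_version):
        # Update the cache file from the updated cached data.
        self.cache_file = new_version
        return True                      # the "load success" instruction

    def on_validation_instruction(self):
        # Only now repoint the soft link; readers remap on their next access.
        self.link_target = self.cache_file

class CacheLoadMaster:
    def __init__(self, components):
        self.components = components

    def handle_load_request(self, new_version):
        # Fan out the load instruction and collect load-success replies.
        ok = [c.on_load_instruction(new_version) for c in self.components]
        if all(ok):
            # The validation instruction is issued only after ALL
            # components loaded successfully, so every host switches
            # its soft link to the new cache file together.
            for c in self.components:
                c.on_validation_instruction()
            return True
        return False

hosts = [LoadComponent(f"host{i}") for i in range(3)]
master = CacheLoadMaster(hosts)
master.handle_load_request("cache.v2")
print([h.link_target for h in hosts])    # ['cache.v2', 'cache.v2', 'cache.v2']
```

Holding back the soft-link update until the validation instruction keeps all hosts serving the same cache version even if one component's load is slow or fails.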
3. A data sharing device among multiple processes, characterized by comprising:
a shared cache file unit, configured to map the cache directory of each application container on the same host to the same folder of the host;
a cache load component, configured to generate cache files from the data cached by the applications carried by the application containers on the same host and store the cache files under the folder;
a mapping unit, configured for the application carried by each application container to map the cache files under the folder by the MMAP sharing mode;
the cache load component being further configured to, after the application carried by the application container updates its cached data, update the cache file under the folder and update the cache file path pointed to by the soft link of the folder;
a cache sharing update unit, configured for the application carried by each application container, before each access to a cache file under the folder, to detect whether the cache file path pointed to by the soft link of the folder has changed, and if so, for the application in each application container to release its original mapping of the cache file and map the cache file at the changed cache file path by the MMAP sharing mode.
4. The data sharing device according to claim 3, characterized in that the cache load component specifically comprises:
a request subunit, configured to send a load request instruction to a cache load master control component after the application carried by the application container updates its cached data;
an update subunit, configured to update the cache file according to the updated cached data after receiving a load instruction sent by the cache load master control component;
a feedback subunit, configured to send a load success instruction to the cache load master control component after the cache file is updated successfully;
a validation subunit, configured to update the cache file path pointed to by the soft link of the folder after receiving a validation instruction sent by the cache load master control component, wherein the validation instruction is issued by the cache load master control component after it has received the load success instructions of all load components.
5. A data sharing system among multiple processes, characterized by comprising N data sharing devices according to claim 3 or 4 and a cache load master control component, each data sharing device being arranged in one host, and N being an integer not less than 2.
6. A readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the data sharing method according to claim 1 or 2.
7. A data sharing device, characterized by comprising a memory and a processor;
the memory being configured to store a computer program;
the processor being configured to execute the computer program to implement the steps of the data sharing method according to claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910620883.9A CN110334069B (en) | 2019-07-10 | 2019-07-10 | Data sharing method among multiple processes and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110334069A true CN110334069A (en) | 2019-10-15 |
CN110334069B CN110334069B (en) | 2022-02-01 |
Family
ID=68146009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910620883.9A Active CN110334069B (en) | 2019-07-10 | 2019-07-10 | Data sharing method among multiple processes and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110334069B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102163232A (en) * | 2011-04-18 | 2011-08-24 | 国电南瑞科技股份有限公司 | SQL (Structured Query Language) interface implementing method supporting IEC61850 object query |
CN105740413A (en) * | 2016-01-29 | 2016-07-06 | 珠海全志科技股份有限公司 | File movement method by FUSE on Linux platform |
CN108322307A (en) * | 2017-01-16 | 2018-07-24 | 中标软件有限公司 | Communication system and method between container based on kernel memory sharing |
CN109213571A (en) * | 2018-08-30 | 2019-01-15 | 北京百悟科技有限公司 | A kind of internal memory sharing method, Container Management platform and computer readable storage medium |
CN109274722A (en) * | 2018-08-24 | 2019-01-25 | 北京北信源信息安全技术有限公司 | Data sharing method, device and electronic equipment |
CN109298935A (en) * | 2018-09-06 | 2019-02-01 | 华泰证券股份有限公司 | A kind of method and application of the multi-process single-write and multiple-read without lock shared drive |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112269655A (en) * | 2020-10-15 | 2021-01-26 | 北京百度网讯科技有限公司 | Memory mapping file cleaning method and device, electronic equipment and storage medium |
CN113110944A (en) * | 2021-03-31 | 2021-07-13 | 北京达佳互联信息技术有限公司 | Information searching method, device, server, readable storage medium and program product |
CN114840356A (en) * | 2022-07-06 | 2022-08-02 | 山东矩阵软件工程股份有限公司 | Data processing method, data processing system and related device |
CN114840356B (en) * | 2022-07-06 | 2022-11-01 | 山东矩阵软件工程股份有限公司 | Data processing method, data processing system and related device |
CN116107515A (en) * | 2023-04-03 | 2023-05-12 | 阿里巴巴(中国)有限公司 | Storage volume mounting and accessing method, equipment and storage medium |
CN116107515B (en) * | 2023-04-03 | 2023-08-18 | 阿里巴巴(中国)有限公司 | Storage volume mounting and accessing method, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110334069B (en) | 2022-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11042311B2 (en) | Cluster system with calculation and storage converged | |
US11093159B2 (en) | Storage system with storage volume pre-copy functionality for increased efficiency in asynchronous replication | |
CN110334069A (en) | Data sharing method and relevant apparatus between multi-process | |
US10831735B2 (en) | Processing device configured for efficient generation of a direct mapped hash table persisted to non-volatile block memory | |
US10409781B2 (en) | Multi-regime caching in a virtual file system for cloud-based shared content | |
US10162828B2 (en) | Striping files across nodes of a distributed file system | |
CN101997918B (en) | Realization method of on-demand allocation of massive storage resources in heterogeneous SAN environment | |
US8290919B1 (en) | System and method for distributing and accessing files in a distributed storage system | |
US20170031945A1 (en) | Method and apparatus for on-disk deduplication metadata for a deduplication file system | |
US11126361B1 (en) | Multi-level bucket aggregation for journal destaging in a distributed storage system | |
US20110153606A1 (en) | Apparatus and method of managing metadata in asymmetric distributed file system | |
US10826990B2 (en) | Clustered storage system configured for bandwidth efficient processing of writes at sizes below a native page size | |
CN104881466B (en) | The processing of data fragmentation and the delet method of garbage files and device | |
US10831521B2 (en) | Efficient metadata management | |
US20130218934A1 (en) | Method for directory entries split and merge in distributed file system | |
US20140324917A1 (en) | Reclamation of empty pages in database tables | |
CN102693230B (en) | For the file system of storage area network | |
CN103020315A (en) | Method for storing mass of small files on basis of master-slave distributed file system | |
KR20210075845A (en) | Native key-value distributed storage system | |
US9772775B2 (en) | Scalable and efficient access to and management of data and resources in a tiered data storage system | |
US10169348B2 (en) | Using a file path to determine file locality for applications | |
CN106960011A (en) | Metadata of distributed type file system management system and method | |
CN116848517A (en) | Cache indexing using data addresses based on data fingerprints | |
US20150212847A1 (en) | Apparatus and method for managing cache of virtual machine image file | |
US20210034538A1 (en) | Volatile read cache in a content addressable storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |