To investigate the effect of an island divertor on peak heat load reduction in a tokamak, a new island divertor was developed and installed on the J-TEXT tokamak. The engineering design takes into account the complexity of the device, building on the physical design, and must also ensure the insulation performance of the coil. Before the coil was installed, the electromagnetic forces on the conductors and the thermal conditions were simulated; the results show that the electromagnetic force on the magnetic island divertor coil will not damage the coil and that no thermal failure will occur.
Authored by Haojie Chen, Bo Rao, Song Zhou, Yunfeng Liang, Yangbo Li, Zhengkang Ren, Feiyue Mao, Chuanxu Zhao, Shuhao Li, Bo Hu, Nengchao Wang, Yonghua Ding, Yuan Pan
A new pulsed power system is being developed with the goal of generating seed magnetic fields of up to 40 T for increasing the fusion yield of indirect-drive inertial confinement fusion (ICF) experiments on the National Ignition Facility. This pulser is located outside the target chamber and delivers a current pulse to the target through a coaxial cable bundle and custom flex-circuit strip-lines integrated into a cryogenic target positioner. At the target, the current passes through a multi-turn solenoid wrapped around the outside of a hohlraum and insulated with a Kapton coating. An 11.33 µF capacitor, charged up to 40 kV and switched by a spark gap, drives up to 40 kA of current before the coil disassembles. A custom Python design optimization code was written to maximize peak magnetic field strength while balancing competing pulser, load, and facility constraints. Additionally, using an institutional multi-physics code, ALE3D, simulations including coil dynamics such as temperature-dependent resistance, coil forces and motion, and magnetic diffusion were conducted for detailed analysis of target coils. First experiments are reported, along with comparisons with current modelling efforts.
Authored by E. Carroll, G. Bracamontes, K. Piston, G. James, C. Provencher, J. Javedani, W. Stygar, A. Povilus, S. Vonhof, D. Yanagisawa, P. Arnold
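To first order, the pulser stage described above is a series RLC discharge into the target coil. The following Python sketch shows the kind of lumped-element estimate a design-optimization code like the one described would iterate over; the 11.33 µF and 40 kV values are from the abstract, while the inductance and resistance are assumed for illustration and are not the paper's values.

```python
import math

# Stated pulser parameters (from the abstract)
C = 11.33e-6    # capacitance [F]
V0 = 40e3       # charge voltage [V]

# Assumed load parameters (illustrative only, not from the paper)
L = 12e-6       # total loop inductance [H]
R = 0.2         # total loop resistance [ohm]

alpha = R / (2 * L)               # damping rate [1/s]
w0 = 1 / math.sqrt(L * C)         # undamped angular frequency [rad/s]
w = math.sqrt(w0**2 - alpha**2)   # ringing frequency (underdamped case)

t_pk = math.atan2(w, alpha) / w   # time of the first current peak
i_pk = (V0 / (w * L)) * math.exp(-alpha * t_pk) * math.sin(w * t_pk)
print(f"peak current ~ {i_pk / 1e3:.1f} kA at {t_pk * 1e6:.1f} us")
```

With these assumed load values the first current peak lands in the tens-of-kA range quoted above; the real system must also account for cable and strip-line parasitics and coil disassembly, which this lumped model ignores.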
The MagNIF team at LLNL is developing a pulsed power platform to enable magnetized inertial confinement fusion and high energy density experiments at the National Ignition Facility. A pulsed solenoidal driver capable of premagnetizing fusion fuel to 40 T is predicted to increase performance of indirect drive implosions. We have written a specialized Python code suite to support the delivery of a practical design optimized for target magnetization and risk mitigation. The code simulates pulsed power in parameterized system designs and converges to high-performance candidates compliant with evolving engineering constraints, such as scale, mass, diagnostic access, mechanical displacement, thermal energy deposition, facility standards, and component-specific failure modes. The physics resolution and associated computational costs of our code are intermediate between those of 0D circuit codes and 3D magnetohydrodynamic codes, to be predictive and support fast, parallel simulations in parameter space. Development of a reduced-order, physics-based target model is driven by high-resolution simulations in ALE3D (an institutional multiphysics code) and multi-diagnostic data from a commissioned pulser platform. Results indicate system performance is sensitive to transient target response, which should include magnetohydrodynamic expansion, resistive heating, nonlinear magnetic diffusion, and phase change. Design optimization results for a conceptual NIF platform are reported.
Authored by C. Provencher, A. Johnson, E. Carroll, A. Povilus, J. Javedani, W. Stygar, B. Kozioziemski, J. Moody, V. Tang
Inertial confinement fusion (ICF) uses the inertia of the fuel itself to confine the high-temperature thermonuclear plasma, achieving thermonuclear fusion and obtaining fusion energy. In the design of the local-volume ignition target capsule, the ignition zone and the main combustion zone are separated by a heavy medium. The ignition zone, where fusion burn begins, is located in the center of the system; its mass is small, so it can be compressed to high density while its overall temperature is raised to the ignition state (local-volume ignition). The temperature increase and density increase of local-volume ignition are relatively decoupled in time. A multi-step enhanced shock wave compensates for the drop in fuel temperature; afterwards, the collision effect accelerates the metal shell layer by layer, and the inertia of the high-Z metal shell, with its larger residual mass, maintains effective compression of the fuel areal density long after the driving source ends. Local-volume ignition has the advantages of requiring no reshaping of the radiation drive pulse, resistance to the influence of hot electrons, less demanding compression symmetry, and large burn gain.
Authored by Pan Liu, Zhangchun Tang, Qiang Gao, Wenbin Xiong
We propose a methodology for the simulation of electrostatic confinement wells in transistors at cryogenic temperatures. This is considered in the context of 22-nm fully depleted silicon-on-insulator transistors due to their potential for implementing quantum bits in scalable quantum computing systems. To overcome thermal fluctuations and improve decoherence times in most quantum bit implementations, they must be operated at cryogenic temperatures. We review the dominant sources of electric field at these low temperatures, including material interface work function differences and trapped interface charges. Intrinsic generation and dopant ionisation are shown to be negligible at cryogenic temperatures when using a mode of operation suitable for confinement. We propose studying cryogenic electrostatic confinement wells in transistors using a finite-element model simulation, and decoupling carrier transport generated fields.
Authored by Conor Power, Robert Staszewski, Elena Blokhina
We show that a new type of dielectric cavity featuring deep sub-wavelength light confinement allows a significant speedup of all-optical signal processing functionalities, without compromising the energy efficiency. The effect is due to enhanced diffusion dynamics in an unconventional geometry.
Authored by Marco Saldutti, Yi Yu, Philip Kristensen, George Kountouris, Jesper Mørk
The pre-magnetization of inertial confinement fusion capsules is a promising avenue for reaching hotspot ignition, as the magnetic field reduces electron thermal conduction losses during hotspot formation. However, to reach high yields, efficient burn-up of the cold fuel is vital. Suppression of heat flows out of the hotspot due to magnetization can restrict the propagation of burn and has been observed to reduce yields in previous studies [1]. This work investigates the potential suppression of burn in a magnetized plasma using the radiation-MHD code 'Chimera' in a planar geometry. This code includes extended-MHD effects, such as the Nernst term, and a Monte-Carlo model for magnetized alpha particle transport and heating. We observe three distinct regimes of magnetized burn in 1D as initial magnetization is increased: thermal conduction driven, alpha driven, and suppressed burn. Field transport due to extended-MHD is also observed to be important, enhancing magnetization near the burn front. In higher dimensions, burn front instabilities have the potential to degrade burn even more severely. Magneto-thermal instabilities (previously observed in laser-heated plasmas [2]) are of particular interest in this problem.
Authored by S. O'Neill, B. Appelbe, J. Chittenden
Humidity is one of the air parameters that affect the characteristics of corona discharge, and a magnetic field also affects the electron motion in corona discharge. We built a constant-humidity chamber and used a wire-to-mesh electrode device to study the effects of humidity and magnetic field on the discharge. Humidity enhances the discharge because water vapor molecules combine with the ions generated by the discharge to form hydrated ions. These ions build a "water flow channel" between the high-voltage wire electrode and the grounded mesh electrode, through which ions pass more smoothly, thereby enhancing the discharge. In an electromagnetic field, the ions are subject to the Lorentz force and their motion changes: the Larmor motion lengthens their paths, collisions with gas molecules increase, and more charged particles are generated, which increases the discharge current. Meanwhile, the electrons and ions generated by ionization at the wire electrode leave the ionization zone faster, which weakens the inhibitory effect of ion accumulation on the discharge and promotes the discharge.
Authored by Wendi Yang, Ming Zhang, Chuan Li, Zutao Wang, Menghan Xiao, Jiawei Li, Dingchen Li, Wei Zheng
Concurrency vulnerabilities caused by synchronization problems can occur during the execution of multi-threaded programs, and their emergence often poses great threats to the system. Once a concurrency vulnerability is exploited, the system may suffer various attacks, seriously affecting its availability, confidentiality, and security. In this paper, we extract 839 concurrency vulnerabilities from the Common Vulnerabilities and Exposures (CVE) database and conduct a comprehensive analysis of their trend, classification, causes, severity, and impact. We obtained the following findings: 1) from 1999 to 2021, the number of disclosed concurrency vulnerabilities shows an overall upward trend; 2) among the types of concurrency vulnerability, race conditions account for the largest proportion; 3) the overall severity of concurrency vulnerabilities is medium; 4) the numbers of concurrency vulnerabilities exploitable via local access and via network access are almost equal, and nearly half of the concurrency vulnerabilities (377/839) can be accessed remotely; 5) the access complexity of 571 concurrency vulnerabilities is medium, and the numbers with high and low access complexity are almost equal. These empirical results can provide support and guidance for research in the field of concurrency vulnerabilities.
Authored by Lili Bo, Xing Meng, Xiaobing Sun, Jingli Xia, Xiaoxue Wu
With the rapid development of Internet technology in recent years, the demand for security support for complex applications is growing ever stronger. Intel Software Guard Extensions (Intel SGX) was created as an extension of Intel systems to enhance software security. Intel SGX allows application developers to create so-called enclaves, which encapsulate sensitive application code and data in a Trusted Execution Environment (TEE) that is completely isolated from other applications, the operating system, and administrative programs. The enclave is the core structure of Intel SGX technology and supports multi-threading: a Thread Control Structure (TCS) stores the special information needed to restore enclave threads when entering or exiting the enclave, and each execution thread in the enclave is associated with a TCS. This paper analyzes and verifies the possible security risks of enclaves under concurrent conditions. We find that under multi-threaded concurrency, a single enclave cannot resist flooding attacks, and the related threads throw TCS exception codes.
Authored by Tong Zhang, Xiangjie Cui, Yichuan Wang, Yanning Du, Wen Gao
In crowdsourced testing services, the intellectual property of crowdsourced testing faces problems such as code plagiarism, difficulty in confirming rights, and unreliable data. Blockchain, a decentralized, tamper-proof distributed ledger, can help solve these problems. This paper proposes an intellectual property rights confirmation system for crowdsourced testing services that combines blockchain, IPFS (the InterPlanetary File System), digital signatures, and code similarity detection to realize the confirmation of crowdsourced testing intellectual property. Performance tests show that the system can meet the requirements of normal crowdsourcing business as well as high-concurrency situations.
Authored by Song Huang, Zhen Yang, Changyou Zheng, Yang Wang, Jinhu Du, Yixian Ding, Jinyong Wan
Java locking is an essential facility in the development of applications and systems, mainly because several modules may run in a synchronized way inside an application and need proper coordination for the whole application or system to remain stable. This paper therefore compares various Java locking mechanisms in order to achieve a better understanding of how these locks work and how to choose a proper locking mechanism. The locks are compared by CPU usage, memory consumption, and ease of implementation, with the aim of guiding developers in choosing locks for different scenarios. For example, with pessimistic locks, whenever a thread obtains resources it must obtain the lock first, which ensures a certain level of data safety but brings great CPU overhead and reduces efficiency. Different locks also differ in memory consumption, and developers sometimes must choose locks rationally under limited memory or face a series of memory problems. In particular, the comparison of Java locks leads to a systematic classification of these locks and helps improve the understanding of their taxonomy.
Authored by Pinguo Huang, Min Fu
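To make the pessimistic-versus-optimistic trade-off concrete, the following sketch contrasts the two disciplines (written in Python for brevity; the structure mirrors Java's `synchronized`/`ReentrantLock` on one side and version-validate-and-retry schemes such as `StampedLock`'s optimistic reads on the other — class and function names are illustrative):

```python
import threading

class PessimisticCounter:
    """Pessimistic: take the lock before every read-modify-write."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

class OptimisticCounter:
    """Optimistic: read a version stamp, compute, then validate-and-commit; retry on conflict."""
    def __init__(self):
        self.value = 0
        self._version = 0
        self._lock = threading.Lock()  # guards only the short commit step

    def increment(self):
        while True:
            seen = self._version           # optimistic read of the stamp
            new = self.value + 1           # compute outside the lock
            with self._lock:
                if self._version == seen:  # nobody committed in between
                    self.value = new
                    self._version += 1
                    return
            # lost the race: retry with fresh state

def hammer(counter, threads=4, each=500):
    """Increment `counter` concurrently and return the final value."""
    ts = [threading.Thread(target=lambda: [counter.increment() for _ in range(each)])
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter.value
```

Under low contention the optimistic variant holds the lock only for the brief commit; under heavy contention its retries burn CPU, which is exactly the CPU-versus-safety trade-off the comparison above measures.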
Server-side web applications are vulnerable to request races. While some previous studies of real-world request races exist, they primarily focus on the root cause of these bugs. To better combat request races in server-side web applications, we need a deep understanding of their characteristics. In this paper, we provide a complementary focus on race effects and fixes with an enlarged set of request races from web applications developed with Object-Relational Mapping (ORM) frameworks. We revisit characterization questions used in previous studies on newly included request races, distinguish the external and internal effects of request races, and relate request-race fixes with concurrency control mechanisms in languages and frameworks for developing server-side web applications. Our study reveals that: (1) request races from ORM-based web applications share the same characteristics as those from raw-SQL web applications; (2) request races violating application semantics without explicit crashes and error messages externally are common, and latent request races, which only corrupt some shared resource internally but require extra requests to expose the misbehavior, are also common; and (3) various fix strategies other than using synchronization mechanisms are used to fix request races. We expect that our results can help developers better understand request races and guide the design and development of tools for combating request races.
Authored by Zhengyi Qiu, Shudi Shao, Qi Zhao, Hassan Khan, Xinning Hui, Guoliang Jin
Given the COVID-19 pandemic, this paper provides a full-process information system to support pathogen detection for large populations, satisfying the requirements of light weight, low cost, high concurrency, high reliability, quick response, and high security. The project includes functional modules for sample collection, sample transfer, sample reception, laboratory testing, test result inquiry, pandemic analysis, and monitoring. The progress and efficiency of each collection point, as well as the status of sample transfer, reception, and laboratory testing, are monitored in real time to support comprehensive surveillance of the pandemic situation and timely, effective dynamic deployment of pandemic prevention resources. Deployed on a cloud platform, the system satisfies ultra-high concurrent data collection requirements of 20 million collections per day and a maximum of 5 million collections per hour, owing to its high concurrency, elasticity, security, and manageability. The system has been widely used in Jiangsu and Shaanxi provinces for the prevention and control of the COVID-19 pandemic. Over 100 million nucleic acid test (NAT) records have been collected nationwide, providing strong informational support for the scientific and reasonable formulation and execution of COVID-19 prevention plans.
Authored by Yushen Wang, Guang Yang, Tianwen Sun, Kai Yang, Changling Zheng
The exponential growth of IoT-type systems has led to a reconsideration of database management systems in terms of storing and handling high-volume data. Recently, many real-time Database Management Systems (DBMSs) have been developed to address issues such as security, managing concurrent access to stored data, and optimizing data query performance. This paper studies methods that reduce the temporal validity range for common DBMSs. The primary purpose of IoT edge devices is to generate data and make it available for machine learning or statistical algorithms, inside the Knowledge Discovery in Databases process. To visualize and obtain critical data mining results, all the device-generated data must be made available as fast as possible for selection, preprocessing, and data transformation. In this research we investigate whether IoT edge devices can be used with properly configured common DBMSs to access data fast, instead of working with real-time DBMSs. We study what kinds of transactions are needed in large IoT ecosystems and analyze techniques for controlling concurrent access to common resources (stored data). For this purpose, we built a series of applications that simulate concurrent write operations against a common DBMS in order to investigate the performance of concurrent access to database resources. Another procedure tested with the developed applications increases the availability of data for users and data mining applications; this is achieved by field indexing.
Authored by Valentin Pupezescu, Marilena-Cătălina Pupezescu, Lucian-Andrei Perișoară
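A minimal version of the concurrent-write experiment described above can be sketched with Python's built-in `sqlite3` module (the table and pragma choices are illustrative, not the authors' configuration): write-ahead logging (WAL) lets readers proceed while a writer commits, and a busy timeout makes contending writers queue rather than fail.

```python
import os
import sqlite3
import tempfile
import threading

db = os.path.join(tempfile.mkdtemp(), "iot.db")

# One-time setup: WAL journaling so readers are not blocked while a writer commits.
init = sqlite3.connect(db)
init.execute("PRAGMA journal_mode=WAL")
init.execute("CREATE TABLE readings (device INTEGER, value REAL)")
init.commit()
init.close()

def writer(device_id, n):
    """Simulate one edge device inserting n readings over its own connection."""
    conn = sqlite3.connect(db, timeout=30)  # wait on a locked database instead of failing
    for i in range(n):
        conn.execute("INSERT INTO readings VALUES (?, ?)", (device_id, float(i)))
        conn.commit()
    conn.close()

threads = [threading.Thread(target=writer, args=(d, 50)) for d in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sqlite3.connect(db).execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(f"{total} rows written by 4 concurrent writers")
```

Timing variants of this loop (with and without WAL, with batched versus per-row commits, with and without a field index) reproduce the kind of concurrent-access measurements the study describes.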
Deadlock is one of the critical problems in the Message Passing Interface (MPI). At present, most techniques for detecting MPI deadlocks rely on exhausting all execution paths of a program, which is extremely inefficient. In addition, as the number of wildcard receive events and processes increases, the number of execution paths grows exponentially, further worsening the situation. To alleviate this problem, we propose a deadlock detection approach called SAMPI, based on match-sets, that avoids exploring execution paths. In this approach, a match detection rule is employed to form rough match-sets based on the Lazy Lamport Clocks Protocol. We then design three refining algorithms based on the non-overtaking rule and the MPI communication mechanism to refine the match-sets. Finally, deadlocks are detected by analyzing the refined match-sets. We performed an experimental evaluation on 15 different programs, and the results show that SAMPI is efficient in detecting deadlocks in MPI programs, especially in handling programs with many interleavings.
Authored by Shushan Li, Meng Wang, Hong Zhang
The SPECTRE family of speculative execution attacks has required a rethinking of formal methods for security. Approaches based on operational speculative semantics have made initial inroads towards finding vulnerable code and validating defenses. However, with each new attack grows the amount of microarchitectural detail that has to be integrated into the underlying semantics. We propose an alternative, lightweight and axiomatic approach to specifying speculative semantics that relies on insights from memory models for concurrency. We use the CAT modeling language for memory consistency to specify execution models that capture speculative control flow, store-to-load forwarding, predictive store forwarding, and memory ordering machine clears. We present a bounded model checking framework parameterized by our speculative CAT models and evaluate its implementation against the state of the art. Due to the axiomatic approach, our models can be rapidly extended to allow our framework to detect new types of attacks and validate defenses against them.
Authored by Hernán Ponce-de-Leon, Johannes Kinder
Early detection of potential conflicts in the community is vital for the Central Java Regional Police Department, especially for the Analyst section of the Directorate of Security Intelligence. Performance in carrying out early detection affects the peace and security of the community. The performance of conflict-detection activities can be improved with an integrated early-detection information system that shortens the time spent on observation, report preparation, information processing, and analysis. Developed using the Unified Process as the software life cycle, the resulting system significantly improves the officers' time-based performance variables, including observation time, report production, data finding, and document formatting.
Authored by Ardiawan Harisa, Rahmat Trinanda, Oki Candra, Hanny Haryanto, Indra Gamayanto, Budi Setiawan
Cloud computing is a unified management and scheduling model for computing resources. To satisfy the multiple resource requirements of various applications, edge computing has been proposed. One challenge of edge computing is the cross-domain data security sharing problem. Ciphertext-policy attribute-based encryption (CP-ABE) is an effective way to ensure secure data sharing. However, many existing schemes focus on cloud computing and do not consider the features of edge computing. To address this issue, we propose a cross-domain data security sharing approach for edge computing based on CP-ABE. Besides data-user attributes, we also consider access control from edge nodes to user data. Our scheme first calculates a public–secret key pair for each edge node based on its attributes, and then uses it to encrypt the secret key of the data ciphertext to ensure data security. In addition, our scheme can add non-user access control attributes such as time, location, and frequency according to different demands; in this paper we take time as an example. Finally, simulation experiments and analysis exhibit the feasibility and effectiveness of our approach.
Authored by Jiacong Li, Hang Lv, Bo Lei
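The key-wrapping step described above — encrypting the secret key of the data ciphertext rather than re-encrypting the data itself — follows the standard envelope-encryption pattern. A toy Python sketch of that pattern, with a SHA-256 counter-mode keystream standing in for both the symmetric cipher and the CP-ABE layer (all function names are illustrative, and this toy cipher is not secure):

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter-mode keystream. NOT secure."""
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + (off // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

def envelope_encrypt(plaintext: bytes, node_key: bytes):
    """Encrypt data under a fresh DEK, then wrap the DEK under the node's key."""
    dek = os.urandom(32)                         # data-encryption key
    ciphertext = keystream_xor(dek, plaintext)   # bulk data under the DEK
    wrapped_dek = keystream_xor(node_key, dek)   # stand-in for the CP-ABE layer
    return ciphertext, wrapped_dek

def envelope_decrypt(ciphertext: bytes, wrapped_dek: bytes, node_key: bytes) -> bytes:
    dek = keystream_xor(node_key, wrapped_dek)   # unwrap, then decrypt
    return keystream_xor(dek, ciphertext)
```

In the scheme above, the role of `node_key` is played by the edge node's attribute-derived CP-ABE key material, so only nodes whose attributes satisfy the policy can unwrap the data key.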
The computing power of smart devices at the perception layer of the power Internet of Things is often insufficient, and complex computation can be outsourced to server resources such as the cloud, but the allocation process is neither safe nor controllable. Under the special constraints of the power Internet of Things, such as multiple users and heterogeneous terminals, we propose a CP-ABE-based non-interactive verifiable computation model for perception-layer data. The model builds on CP-ABE, NPOT, FHE, and other relevant security and verifiability theories, and designs a new multi-user non-interactive secure verifiable computing scheme to ensure that only users holding the decryption key can participate in the execution of the NPOT scheme. For the computational design of the model, we give a detailed description of the system model, the security model, and the scheme. Based on the definitions given, we prove the correctness and security of the non-interactive secure verifiable model in the power Internet of Things environment and analyze the interaction cost of the model. Finally, we show that the proposed CP-ABE-based non-interactive verifiable computation model for the perception layer greatly improves security, applicability, and verifiability, and can meet the requirements of secure outsourced computing in the power Internet of Things environment.
Authored by Jianming Zhao, Weiwei Miao, Zeng Zeng
SWIM (System Wide Information Management) has become the development direction of the ATM (Air Traffic Management) system, providing interoperable services to promote the exchange and sharing of data among stakeholders. The premise of data sharing is security, and access control has become the key guarantee for secure sharing and exchange. The CP-ABE (Ciphertext-Policy Attribute-Based Encryption) scheme realizes one-to-many access control, which suits the characteristics of the SWIM environment. However, combining existing CP-ABE access control with SWIM faces the following constraints. 1. The traditional single-authority CP-ABE scheme requires unconditional trust in the authority center; once the center is corrupted, its excessive authority may lead to the complete destruction of system security, so SWIM, with its large user group and data volume, requires multi-authority CP-ABE for access control. 2. There is no unified management of users' data access records, and the lack of supervision of user behavior makes it impossible to effectively deter malicious users. 3. A certain proportion of data users in SWIM are lightweight, such as aircraft and users with handheld devices, and their computing capacity becomes the bottleneck of data sharing. Aiming at these issues, this paper proposes a multi-authority CP-ABE scheme based on cloud–chain fusion, called the MOV ATM scheme, which has three advantages. 1. Based on multi-cloud, multi-authority CP-ABE, the solution conforms to the distributed nature of SWIM. 2. The scheme provides outsourced computing and verification functions for lightweight users. 3. Based on blockchain technology, a blockchain maintained by all SWIM stakeholders is designed; it records users' access records as transactions to ensure that access records are well documented and cannot be tampered with. Compared with other schemes, this scheme adds multi-authority, outsourcing, verifiability, and auditability functions without increasing the decryption cost for users.
Authored by Qing Wang, Lizhe Zhang, Xin Lu, Kenian Wang
At present, ciphertext-policy attribute-based encryption (CP-ABE) is widely used in data-sharing fields such as cross-border paperless trade and digital government. However, CP-ABE still faces challenges including single point of failure, key abuse, and key unaccountability. To address these problems, we propose an accountable CP-ABE mechanism based on a blockchain system. First, we establish two authorization agencies, MskCA and AttrVN (Attribute Verification Network): the MskCA realizes master-key escrow, and the AttrVN manages and validates users' attributes. In this way, our system avoids a single point of failure and improves the privacy of user attributes and the security of keys. Moreover, to make the transfer of CP-ABE key parameters auditable, we introduce DIDs and record the parameter-transfer process on the blockchain. Finally, we theoretically prove the security of our CP-ABE. Comprehensive comparison verifies the superiority of our CP-ABE, and the proposed scheme additionally offers properties such as fast decryption.
Authored by Jingyi Wang, Cheng Huang, Yiming Ma, Huiyuan Wang, Chao Peng, HouHui Yu
Ensuring data rights, openness, and transaction flow is important in today's digital economy. Few scholars have studied data confirmation; only with the development of blockchain has it begun to be taken seriously. However, blockchain is open and transparent by nature, so there is a certain probability of exposing the privacy of data owners. In this paper we therefore propose a new data-confirmation scheme based on Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Information uniquely identifying the data owner is embedded in the CP-ABE ciphertext via Paillier homomorphic encryption, and the data can have multiple sharers. No one has access to the plaintext during the whole confirmation process, which reduces the risk of source-data leakage.
Authored by Lingyun Zhang, Yuling Chen, Xiaobin Qian
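The embedding step relies on Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so an owner identifier can be folded into an existing ciphertext without ever decrypting it. A textbook sketch in Python (demo-sized primes and illustrative values, not the paper's parameters, and far too small to be secure):

```python
import math
import random

def _next_prime(n):
    """Smallest prime >= n (trial division; fine for demo-sized numbers)."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

class Paillier:
    """Textbook Paillier with g = n + 1. Demo key sizes only -- not secure."""
    def __init__(self, p, q):
        self.n = p * q
        self.n2 = self.n * self.n
        self.lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
        self.mu = pow(self.lam, -1, self.n)  # valid because g = n + 1

    def encrypt(self, m):
        while True:
            r = random.randrange(2, self.n)
            if math.gcd(r, self.n) == 1:
                break
        return pow(self.n + 1, m, self.n2) * pow(r, self.n, self.n2) % self.n2

    def decrypt(self, c):
        u = pow(c, self.lam, self.n2)
        return (u - 1) // self.n * self.mu % self.n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a hypothetical owner tag can be folded into an existing ciphertext.
key = Paillier(_next_prime(5000), _next_prime(6000))
c_data = key.encrypt(1234)          # hypothetical data fingerprint
c_owner = key.encrypt(42)           # hypothetical owner identifier
tagged = c_data * c_owner % key.n2  # encrypts 1234 + 42 without decryption
```

Decrypting `tagged` recovers the sum of the fingerprint and the owner identifier, which is the property that lets the scheme above bind owner information into the ciphertext while keeping the plaintext hidden throughout.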
Data sharing is a helpful and economical service provided by cloud computing, but information security concerns arise from it, since the data is moved to cloud servers. To protect sensitive and important data, different techniques are used to improve access control over shared information. Among these techniques, ciphertext-policy attribute-based encryption (CP-ABE) can be made very useful and safe. Conventional CP-ABE concentrates on data confidentiality only, whereas the protection of the client's personal privacy is now a significant problem. CP-ABE with a hidden access (HA) policy ensures data confidentiality and also keeps the client's privacy from being exposed. Nevertheless, most current schemes are inefficient in communication overhead and computation cost. In addition, most of these mechanisms give no thought to attribute verification or to the issue of privacy leakage in the verification stage. To handle the issues mentioned above, a privacy-preserving CP-ABE method with efficient attribute verification is presented in this manuscript. Furthermore, its private keys achieve constant size. Meanwhile, the suggested scheme achieves security under the decisional n-BDHE and decisional linear assumptions. The computational results confirm the benefits of the introduced method.
Authored by Rokesh Yarava, G. Rama Rao, Yugandhar Garapati, G. Charles Babu, Srisailapu Prasad
With the rapid innovation of cloud computing technologies, which has expanded the application of the Internet of Things (IoT), smart health (s-health) is expected to enhance the quality of the healthcare system. However, s-health records (SHRs) outsourced, stored, and shared via a cloud server must be protected, and users' attribute privacy must be preserved in the public domain. Ciphertext-policy attribute-based encryption (CP-ABE) is a cryptographic primitive that promises fine-grained access control in the cloud environment. However, the direct application of traditional CP-ABE brings many security issues, such as attribute privacy violations and future vulnerability to powerful attackers mounting side-channel and cold-boot attacks. To solve these problems, many CP-ABE schemes have been proposed, but none of them concurrently supports partially hidden policies and leakage resilience. Hence, we propose a new smart health records sharing scheme based on partially policy-hidden CP-ABE with leakage resilience, which is resilient to bounded leakage from each of many secret keys per user, as well as many master keys, and which ensures attribute privacy. Our scheme hides the attribute values of users in both secret keys and ciphertexts, which contain sensitive information in the cloud environment, and is fully secure in the standard model under static assumptions.
Authored by Edward Acheampong, Shijie Zhou, Yongjian Liao, Emmanuel Antwi-Boasiako, Isaac Obiri