Any software that runs malicious payloads on victims' computers is referred to as malware. It is a growing threat that costs individuals, businesses, and organizations a great deal of money, and attacks on security have developed significantly in recent years. Malware may arrive through both offline and online media such as chat, SMS, and spam (email or social media), and it may carry built-in defensive mechanisms that let it conceal itself from antivirus software or even corrupt it. As a result, there is an urgent need to detect and prevent malware before it damages critical assets around the world. In fact, many different techniques and tools are used to combat malware. In this paper, malware samples were analyzed in a VirtualBox environment using in-depth analysis based on reverse engineering and advanced static malware analysis techniques. The results obtained from the malware analysis represent a set of valuable information that anti-malware and antivirus companies need in order to update their products.
Authored by Maher Ismael, Karam Thanoon
A good ecological environment is crucial to attracting, cultivating, and retaining talent and to making talent fully effective. This study provides a solution to the current mainstream problem of how to deal with excellent-employee turnover in advance, so as to promote a sustainable and harmonious human-resources ecology in enterprises facing a shortage of talent. The study obtains open data sets, conducts data preprocessing, model construction, and model optimization, and describes a set of enterprise employee turnover prediction models based on RapidMiner workflows. Data preprocessing is completed with the help of the statistical analysis software IBM SPSS Statistics and RapidMiner. Statistical charts, scatter plots, and boxplots are generated to support visual data analysis. Machine learning, model application, performance vectors, and cross-validation are realized through RapidMiner's operators and workflows. The model design covers support vector machines, naive Bayes, decision trees, and neural networks, and the models are compared on four performance measures: accuracy, precision, recall, and F1-score. The decision tree model performs best. The performance evaluation results confirm the effectiveness of this model for the sustainable exploration of employee turnover prediction in human resource management.
Authored by Yong Shi
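To make the modeling step concrete, here is a minimal scikit-learn sketch of a decision-tree turnover model evaluated with 10-fold cross-validation on the four reported measures. This illustrates the general workflow only, not the authors' RapidMiner pipeline; the file and column names are hypothetical.

    # Hedged sketch: decision-tree turnover prediction with cross-validation.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.model_selection import cross_validate
    from sklearn.tree import DecisionTreeClassifier

    df = pd.read_csv("hr_data.csv")                     # open turnover data set
    X = df[["satisfaction", "tenure", "salary_level"]]  # assumed feature columns
    y = df["left"]                                      # 1 = left, 0 = stayed

    scores = cross_validate(
        DecisionTreeClassifier(max_depth=5, random_state=0),
        X, y, cv=10, scoring=["accuracy", "precision", "recall", "f1"])
    for m in ["accuracy", "precision", "recall", "f1"]:
        print(m, scores[f"test_{m}"].mean())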
Bus factor is a metric that identifies how resilient a project is to sudden engineer turnover: it states the minimal number of engineers that would have to be hit by a bus for the project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. Through a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results to those of the state-of-the-art tool by Avelino et al. against the ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better at predicting both the bus factor and the key developers. Finally, we use the interviews and surveys to derive a set of best practices for addressing the bus factor issue and proposals for a possible bus factor assessment tool.
Authored by Elgun Jabrayilzade, Mikhail Evtikhiev, Eray Tüzün, Vladimir Kovalenko
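For intuition, the following minimal sketch estimates the bus factor from VCS data alone, in the spirit of the baseline by Avelino et al.: an engineer is assumed to "know" a file if they authored a sizeable share of its changes, and engineers are removed greedily until most files are orphaned. The thresholds are illustrative assumptions; the paper's multimodal algorithm additionally incorporates code reviews and meetings.

    # Hedged sketch: greedy VCS-only bus factor estimate (thresholds assumed).
    def bus_factor(file_authors, coverage=0.5, know=0.1):
        """file_authors maps filename -> {author: number of changes}."""
        knowledge = {}
        for f, authors in file_authors.items():
            total = sum(authors.values())
            knowledge[f] = {a for a, n in authors.items() if n / total >= know}
        active = set().union(*knowledge.values())
        removed = []
        while True:
            orphaned = sum(1 for s in knowledge.values() if not (s & active))
            if orphaned / len(knowledge) > coverage:
                return len(removed), removed  # engineers whose loss stalls work
            # drop the engineer who "knows" the most files next
            top = max(active, key=lambda a: sum(a in s for s in knowledge.values()))
            active.discard(top)
            removed.append(top)

    files = {"a.py": {"ann": 8, "bob": 2}, "b.py": {"ann": 5}, "c.py": {"cid": 7}}
    print(bus_factor(files))   # smallest set whose loss orphans >50% of files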
Objective measures are ubiquitous in the formulation, design and implementation of deep space missions. Tour durations, flyby altitudes, propellant budgets, power consumption, and other metrics are essential to developing and managing NASA missions. But beyond the simple metrics of cost and workforce, it has been difficult to identify objective, quantitative measures that assist in evaluating choices made during formulation or implementation phases in terms of their impact on flight operations. As part of the development of the Europa Clipper Mission system, a set of operations metrics has been defined along with the necessary design information and software tooling to calculate them. We have applied these methods and metrics to help assess the impact on the flight team of the six options for the Clipper Tour that are being vetted for selection in the fall of 2021. To generate these metrics, the Clipper MOS team first designed the set of essential processes by which flight operations will be conducted, using a standard approach and template to identify (among other aspects) timelines for each process, along with their time constraints (e.g., uplinks for sequence execution). Each of the resulting 50 processes is documented in a common format and concurred by stakeholders. Process timelines were converted into generic schedules and workforce-loaded using COTS scheduling software, based on the inputs of the process authors and domain experts. Custom code was written to create an operations schedule for a specific portion of Clipper's prime mission, with instances of a given process scheduled based on specific timing rules (e.g., process X starts once per week on Thursdays) or relative to mission events (e.g., the sequence generation process begins on a Monday, at least three weeks before each Europa closest approach). Over a 5-month period, and for each of the six Clipper candidate tours, the result was a 20,000+ line, workforce-loaded schedule that documents all of the process-driven work effort at the level of individual roles, along with a significant portion of the level-of-effort work. Post-processing code calculated the absolute and relative number of work hours during a nominal 5-day / 40-hour work week, the work effort during 2nd and 3rd shift, and 1st shift on weekends. The resulting schedules and shift tables were used to generate objective measures that relate to both human factors and operational risk, and showed that Clipper tours which utilize 6:1 resonant (21.25 day) orbits instead of 4:1 resonant (14.17 day) orbits during the first dozen or so Europa flybys are advantageous to flight operations. A similar approach can be extended to assist missions in more objective assessments of a number of mission issues and trades, including tour selection and spacecraft design for operability.
Authored by Duane Bindschadler, Nari Hwangpo, Marc Sarrel
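A hedged sketch of the kind of schedule post-processing described above: given workforce-loaded schedule entries, tally 1st/2nd/3rd-shift and weekend hours as objective workload measures. The record format and shift boundaries are illustrative assumptions, not the Clipper team's actual tooling.

    # Hedged sketch: tally scheduled work hours by shift and weekend status.
    # Record format and shift boundaries are illustrative assumptions.
    from datetime import datetime

    def shift_of(hour):
        if 6 <= hour < 14:
            return "1st"
        return "2nd" if 14 <= hour < 22 else "3rd"

    def tally(entries):
        """entries: list of (ISO start timestamp, duration in hours, role)."""
        totals = {"1st": 0.0, "2nd": 0.0, "3rd": 0.0, "weekend": 0.0}
        for start, hours, _role in entries:
            t = datetime.fromisoformat(start)
            totals[shift_of(t.hour)] += hours
            if t.weekday() >= 5:              # Saturday or Sunday
                totals["weekend"] += hours
        return totals

    print(tally([("2021-04-05T08:00", 8, "ACE"), ("2021-04-10T22:00", 4, "ACE")]))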
The security of energy data collection is the basis for achieving a reliable and secure intelligent smart grid. The newest secure-communication paradigm for data collection is Zero Trust communication, whose strategy is to trust no device, whether outside or inside the network. Only when a device authenticates successfully and its software and hardware are verified as secure does the intelligent energy power system allow the device to enroll into the network; otherwise the device is denied. Even while a device is communicating with the energy system, Zero Trust continues to check its security and vulnerabilities; if the device shows any security or vulnerability issue, Zero Trust removes it from the network. This keeps the energy power system secure and lays a foundation for the security analysis of intelligent power units.
Authored by Yan Chen, Xingchen Zhou, Jian Zhu, Hongbin Ji
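A minimal sketch of the admission-and-reverification flow just described; authenticate(), posture_ok(), and quarantine() are hypothetical placeholders for real authentication and software/hardware posture checks, not any specific product's API.

    # Hedged sketch: Zero Trust admission loop for energy-data devices.
    # authenticate(), posture_ok(), quarantine() are hypothetical callbacks.
    def admit(device, authenticate, posture_ok):
        # Trust no device by default, whether inside or outside the network.
        if not authenticate(device):
            return False                  # authentication failed: deny
        if not posture_ok(device):
            return False                  # software/hardware check failed: deny
        return True                       # enroll into the network

    def reverify(enrolled, posture_ok, quarantine):
        # Zero Trust keeps checking devices after enrollment.
        for device in list(enrolled):
            if not posture_ok(device):    # security or vulnerability issue found
                enrolled.remove(device)
                quarantine(device)        # deny further communication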
How can high-level directives concerning risk, cybersecurity and compliance be operationalized in the central nervous system of any organization above a certain complexity? How can the effectiveness of technological solutions for security be proven and measured, and how can this technology be aligned with the governance and financial goals at the board level? These are the essential questions for any CEO, CIO or CISO concerned with the wellbeing of the firm. The concept of Zero Trust (ZT) approaches information and cybersecurity from the perspective of the asset to be protected, and from the value that asset represents. Zero Trust has been around for quite some time. Most professionals associate Zero Trust with a particular architectural approach to cybersecurity, involving concepts such as segments, resources that are accessed in a secure manner, and the maxim "always verify, never trust". This paper describes the current state of the art in Zero Trust usage. We investigate the limitations of current approaches and how these are addressed in the form of Critical Success Factors in the Zero Trust Framework developed by ON2IT 'Zero Trust Innovators' (1). Furthermore, this paper describes the design and engineering of a Zero Trust artefact that addresses the problems at hand (2), according to Design Science Research (DSR). The last part of this paper outlines the setup of an empirical validation through practitioner-oriented research, in order to gain broader acceptance and implementation of Zero Trust strategies (3). The final result is a proposed framework and associated technology which, via Zero Trust principles, addresses multiple layers of the organization to grasp and align cybersecurity risks and understand the readiness and fitness of the organization and its measures to counter cybersecurity risks.
Authored by Yuri Bobbert, Jeroen Scheerder
Internet of Things (IoT) evolution calls for stringent communication demands, including low delay and reliability. At the same time, wireless mesh technology is used to extend the communication range of IoT deployments in a multi-hop manner. However, Wireless Mesh Networks (WMNs) face link failures due to unstable topologies, resulting in unsatisfied IoT requirements. Named-Data Networking (NDN) can enhance WMNs to meet such IoT requirements, thanks to its content naming scheme and in-network caching, but it necessitates adaptability to the challenging conditions of WMNs. In this work, we argue that Software-Defined Networking (SDN) is an ideal solution to fill this gap and introduce an integrated SDN-NDN deployment over WMNs involving: (i) a global view of the network in real time; (ii) centralized decision making; and (iii) dynamic NDN adaptation to network changes. The proposed system is deployed and evaluated over the wiLab.1 Fed4FIRE+ testbed. The proof-of-concept results validate that the centralized control of SDN effectively supports the NDN operation in unstable topologies with frequent dynamic changes, such as WMNs.
Authored by Sarantis Kalafatidis, Vassilis Demiroglou, Lefteris Mamatas, Vassilis Tsaoussidis
This paper proposes an improved version of the newly developed Honey Badger Algorithm (HBA), called Generalized Opposition-Based Learning HBA (GOBL-HBA), for solving the mesh router placement problem. The proposed GOBL-HBA integrates the generalized opposition-based learning strategy into the original HBA. GOBL-HBA is validated in terms of three performance metrics: user coverage, network connectivity, and fitness value. The evaluation uses various scenarios with different numbers of mesh clients, numbers of mesh routers, and coverage radius values. The simulation results reveal the efficiency of GOBL-HBA compared with the classical HBA, the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO).
Authored by Sylia Taleb, Yassine Meraihi, Seyedali Mirjalili, Dalila Acheli, Amar Ramdane-Cherif, Asma Gabis
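To make the opposition idea concrete, here is a minimal sketch of generalized opposition-based learning as commonly defined in the literature: for a candidate x in [a, b], the generalized opposite is k*(a+b) - x with random k in [0, 1], and the better of the pair is kept. The fitness function and bounds are placeholders; this is not the authors' exact GOBL-HBA implementation.

    # Hedged sketch: one generalized opposition-based learning (GOBL) step.
    import random

    def gobl_step(pop, lo, hi, fitness):
        """Keep the better of each candidate and its generalized opposite."""
        out = []
        for x in pop:
            k = random.random()                       # k ~ U(0, 1)
            opp = [min(max(k * (lo + hi) - xi, lo), hi) for xi in x]
            out.append(min(x, opp, key=fitness))      # minimization assumed
        return out

    # toy usage: minimize the sphere function over [-10, 10]^2
    pop = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(5)]
    pop = gobl_step(pop, -10, 10, lambda x: sum(v * v for v in x))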
Mesh networks based on the wireless local area network (WLAN) technology, as specified by the standards amendment IEEE 802.11s, provide for a flexible and low-cost interconnection of devices and embedded systems for various use cases. To assess the real-world performance of WLAN mesh networks and potential optimization strategies, suitable testbeds and measurement tools are required. Designed for highly automated transport-layer throughput and latency measurements, the software FLExible Network Tester (Flent) is a promising candidate. However, so far Flent does not integrate information specific to IEEE 802.11s networks, such as peer-link status data or mesh routing metrics. Consequently, we propose Flent extensions that additionally capture IEEE 802.11s information as part of the automated performance tests. For the functional validation of our extensions, we conduct Flent measurements in a mesh mobility scenario using the network emulation framework Mininet-WiFi.
Authored by Michael Rethfeldt, Tim Brockmann, Richard Eckhardt, Benjamin Beichler, Lukas Steffen, Christian Haubelt, Dirk Timmermann
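On Linux, the IEEE 802.11s state that such extensions would capture is exposed by the iw utility; a hedged sketch of snapshotting it alongside a measurement run might look as follows (the interface name is an assumption, and iw must be installed):

    # Hedged sketch: snapshot 802.11s mesh state via `iw` during a test run.
    # "mesh0" is an assumed interface name.
    import subprocess

    def mesh_snapshot(iface="mesh0"):
        paths = subprocess.run(["iw", "dev", iface, "mpath", "dump"],
                               capture_output=True, text=True).stdout
        peers = subprocess.run(["iw", "dev", iface, "station", "dump"],
                               capture_output=True, text=True).stdout
        return {"mesh_paths": paths, "peer_links": peers}

    snap = mesh_snapshot()
    print(snap["mesh_paths"])   # next hops and airtime link metrics per path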
Intelligent Environments (IEs) enrich the physical world by connecting it to software applications in order to increase user comfort, safety and efficiency. IEs are often supported by wireless networks of smart sensors and actuators, which offer multi-year battery life within small packages. However, existing radio mesh networks suffer from high latency, which precludes their use in many user interface systems such as real-time speech, touch or positioning. While recent advances in optical networks promise low end-to-end latency through symbol-synchronous transmission, current approaches are power hungry and therefore cannot be battery powered. We tackle this problem by introducing BoboLink, a mesh network that delivers low-power and low-latency optical networking through a combination of symbol-synchronous transmission and a novel wake-up technology. BoboLink delivers mesh-wide wake-up in 1.13ms, with a quiescent power consumption of 237µW. This enables building-wide human-computer interfaces to be seamlessly delivered using wireless mesh networks for the first time.
Authored by Mengyao Liu, Jonathan Oostvogels, Sam Michiels, Wouter Joosen, Danny Hughes
The “Internet of Things (IoT)” is a term that describes physical devices equipped with sensors, processing software, power sources, and other technologies that connect and exchange information with other systems and devices through the Internet and other forms of communication. The RPL protocol can efficiently establish network routes, communicate routing information, and adjust the topology. The 6LoWPAN concept was born out of the belief that IP should be applied even to the tiniest devices, and that low-power devices with minimal computational capability should be permitted to join the IoT. This paper discusses DIS-flooding attacks against RPL-based IoT networks along with their mitigation techniques.
Authored by Nisha, Akshaya Dhingra, Vikas Sindhu
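One commonly discussed mitigation is rate-limiting DIS messages per neighbor. A hedged sketch of such a detector is shown below; the window length and threshold are illustrative assumptions rather than values from the paper.

    # Hedged sketch: flag neighbors that flood RPL DIS messages.
    import time
    from collections import defaultdict, deque

    WINDOW = 10.0   # seconds (assumed)
    MAX_DIS = 5     # DIS messages allowed per neighbor per window (assumed)

    history = defaultdict(deque)

    def on_dis_received(neighbor_id, now=None):
        now = time.monotonic() if now is None else now
        q = history[neighbor_id]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()                       # forget events outside the window
        if len(q) > MAX_DIS:
            return "suspected DIS flooding"   # e.g., ignore this node's DIS
        return "ok"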
The “Internet of Things” (IoT) is the internetworking of physical devices known as 'things' with algorithms, equipment, and techniques that allow them to communicate with other devices, equipment, and software over the network. With the advancement of data communication, every device can be connected via the Internet. For this purpose, resource-constrained sensor nodes are used to collect data from homes, offices, hospitals, industries, and data centers, but various vulnerabilities may ruin the functioning of these sensor nodes. Routing Protocol for Low Power and Lossy Networks (RPL) is a standardized, secure routing protocol designed for the 6LoWPAN IoT network. It is a proactive routing protocol that performs routing over a destination-oriented topology. A sinkhole is a networking attack that destroys the topology of the RPL protocol, as the attacker node diverts all the traffic in the IoT network through itself. In this paper, we survey sinkhole attacks in IoT and propose different methods for preventing and detecting these attacks in low-power IoT networks.
Authored by Jyoti Rani, Akshaya Dhingra, Vikas Sindhu
In the IoT (Internet of Things) domain, it is still a challenge to modify the routing behavior of IoT traffic at the decentralized backbone network. In this paper, centralized and flexible software-defined networking (SDN) is utilized to route IoT traffic. Managing IoT data transmission through the SDN core network makes it possible to choose the path with the lowest delay, minimum packet loss, or fewest hops. Therefore, fault-tolerant delay-aware routing is proposed for the emulated SDN-based backbone network to handle delay-sensitive IoT traffic. In addition, a hybrid of GNS3 and Mininet-WiFi emulation is introduced to interconnect the SDN-based backbone network in GNS3 with the 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Network) sensor network in Mininet-WiFi.
Authored by May Han, Soe Htet, Lunchakorn Wuttisttikulkij
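A hedged sketch of the delay-aware path selection idea using the networkx library: annotate links with measured delay, pick the shortest path by delay, and fall back to an alternative path when a link fails. The topology and delay values are illustrative.

    # Hedged sketch: delay-aware, fault-tolerant path choice for IoT traffic.
    # Topology and delays are illustrative; requires `pip install networkx`.
    import networkx as nx

    g = nx.Graph()
    g.add_edge("s1", "s2", delay=2.0)
    g.add_edge("s2", "s4", delay=1.5)
    g.add_edge("s1", "s3", delay=1.0)
    g.add_edge("s3", "s4", delay=1.0)

    def best_path(src, dst):
        return nx.shortest_path(g, src, dst, weight="delay")

    print(best_path("s1", "s4"))   # ['s1', 's3', 's4']: lowest total delay

    g.remove_edge("s3", "s4")      # simulate a link failure
    print(best_path("s1", "s4"))   # reroutes via ['s1', 's2', 's4']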
Artificial intelligence is a subfield of computer science that refers to intelligence displayed by machines or software. AI research has driven the rapid development of smart devices that have a significant impact on our daily lives. Science, engineering, business, and medicine have all improved their predictive power in order to make our daily tasks easier. As this study shows, the quality and efficiency of domains that use artificial intelligence have improved, and AI successfully handles data organization and environment difficulties, allowing for the development of more solid and rigorous models. The pace of life is quickening in the digital age, and the PC-based Internet falls well short of meeting people's needs: users want convenient network information services at any time and from any location.
Authored by K. Thiagarajan, Chandra Dixit, M. Panneerselvam, C.Arunkumar Madhuvappan, Samata Gadde, Jyoti Shrote
The development of industrial robots, as carriers of artificial intelligence, has played an important role in promoting the popularization of AI-based automation technology. Against this background, the paper introduces the system structure, hardware structure, and software system of a mobile climbing robot based on computer big data technology, focusing on the robot's compound mechanism design and obstacle-avoidance control algorithm. Smart home computing centers on the “home” and brings together related peripheral industries to promote smart home services such as smart appliances, home entertainment, home health care, and security monitoring, in order to create a safe, secure, energy-efficient, sustainable, and comfortable residential living environment. Yet after twenty years there is still no clear definition of “intelligence at home”; Philips, a leading consumer electronics manufacturer, once stated that intelligence should comprise sensing, connectedness, learning, adaptation, and ease of interaction. Smart applications and services are still in the early stages of development, and not all of them can yet exhibit these five intelligent traits.
Authored by Karrar Hussain, D. Vanathi, Bibin Jose, S Kavitha, Bhuvaneshwari Rane, Harpreet Kaur, C. Sandhya
Intelligent transportation systems, such as connected vehicles, are able to establish real-time, optimized and collision-free communication with the surrounding ecosystem. Introducing the internet of things (IoT) in connected vehicles relies on deployment of massive scale sensors, actuators, electronic control units (ECUs) and antennas with embedded software and communication technologies. Combined with the lack of designed-in security for sensors and ECUs, this creates challenges for security engineers and architects to identify, understand and analyze threats so that actions can be taken to protect the system assets. This paper proposes a novel STRIDE-based threat model for IoT sensors in connected vehicle networks aimed at addressing these challenges. Using a reference architecture of a connected vehicle, we identify system assets in connected vehicle sub-systems such as devices and peripherals that mostly involve sensors. Moreover, we provide a prioritized set of security recommendations, with consideration to the feasibility and deployment challenges, which enables practical applicability of the developed threat model to help specify security requirements to protect critical assets within the sensor network.
Authored by Sajib Kuri, Tarim Islam, Jason Jaskolka, Mohamed Ibnkahla
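To illustrate the shape of such a model (not the authors' actual threat catalogue), here is a minimal sketch enumerating the six STRIDE categories for one hypothetical connected-vehicle sensor asset:

    # Hedged sketch: STRIDE enumeration for one connected-vehicle asset.
    # The asset and example threats are illustrative, not from the paper.
    STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
              "Denial of service", "Elevation of privilege"]

    asset = {
        "name": "wheel-speed sensor -> ECU link",
        "threats": {
            "Spoofing": "injected frames impersonating the sensor",
            "Tampering": "modified sensor readings in transit",
            "Repudiation": "no audit trail for diagnostic writes",
            "Information disclosure": "sniffing unencrypted bus traffic",
            "Denial of service": "bus flooding starves sensor frames",
            "Elevation of privilege": "debug port allows ECU reflash",
        },
    }

    for category in STRIDE:        # review every category per asset
        print(f"{category}: {asset['threats'][category]}")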
In the world of information technology and the Internet, which has become part of human life and continues to expand, attention to user requirements such as information security, fast processing, dynamic and instant access, and cost savings has become essential. The technology proposed to address such problems is cloud computing, today considered one of the most essential distributed tools for processing and storing data on the Internet. With its increasing use, the need to schedule tasks so as to make the best use of resources and respond appropriately to requests has received much attention, and many efforts have been and are being made in this regard. To this end, various algorithms have been proposed for computing resource allocation, each of which has tried to solve the challenge of equitable distribution while maximizing resource usage. One of these methods is the DRF algorithm. Although it offers a better approach than previous algorithms, it faces challenges, especially the time-consuming computation of resource allocations; these challenges make DRF particularly costly both for small numbers of requests with high resource capacity and for high numbers of simultaneous requests. This study seeks to reduce the computational costs of DRF-based resource allocation by introducing a new approach that automates the calculations with machine learning and artificial intelligence algorithms (Autonomic Dominant Resource Fairness, or A-DRF).
Authored by Amin Fakhartousi, Sofia Meacham, Keith Phalp
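For reference, a minimal sketch of the classic DRF allocation loop that A-DRF builds on: repeatedly grant the next task of the user with the smallest dominant share while capacity remains. Capacities and demands follow the well-known example from Ghodsi et al.; the machine learning layer proposed by this study is not modeled here.

    # Hedged sketch: classic Dominant Resource Fairness (DRF) allocation loop.
    capacity = {"cpu": 9.0, "mem": 18.0}
    demands = {"A": {"cpu": 1.0, "mem": 4.0},   # per-task demand of user A
               "B": {"cpu": 3.0, "mem": 1.0}}   # per-task demand of user B

    used = {r: 0.0 for r in capacity}
    share = {u: 0.0 for u in demands}           # dominant share per user
    tasks = {u: 0 for u in demands}

    while True:
        u = min(share, key=share.get)           # least-served user goes next
        d = demands[u]
        if any(used[r] + d[r] > capacity[r] for r in capacity):
            break                               # no capacity for the next task
        for r in capacity:
            used[r] += d[r]
        tasks[u] += 1
        share[u] = max(d[r] * tasks[u] / capacity[r] for r in capacity)

    print(tasks)   # dominant shares equalize: {'A': 3, 'B': 2}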
Vehicular Ad-hoc Networks (VANETs) are a fast-emerging research area these days due to their contribution to the design of Intelligent Transportation Systems (ITS), a well-organized group of wireless networks. VANETs are a derived class of Mobile Ad-hoc Networks (MANETs): ad-hoc networks formed instantly owing to the mobility of vehicles on the road. The goal of ITS is to enhance road safety, driving comfort, and traffic effectiveness by alerting drivers at the right time about upcoming dangerous situations, traffic jams, road diversions, weather conditions, real-time news, and entertainment. Vehicular communication can be considered an enabler for future driverless cars. For all of the above applications, it is necessary to create a threat-free environment that allows secure, fast, and efficient communication in VANETs. In this paper, we discuss the overview, characteristics, security, applications, and various data dissemination techniques of VANETs.
Authored by Bhagwati Sharan, Megha Chhabra, Anil Sagar
Modern hardware systems are composed of a variety of third-party Intellectual Property (IP) cores to implement their overall functionality. Since hardware design is a globalized process involving various (untrusted) stakeholders, a secure management of the valuable IP between authors and users is inevitable to protect them from unauthorized access and modification. To this end, the widely adopted IEEE standard 1735-2014 was created to ensure confidentiality and integrity. In this paper, we outline structural weaknesses in IEEE 1735 that cannot be fixed with cryptographic solutions (given the contemporary hardware design process) and thus render the standard inherently insecure. We practically demonstrate the weaknesses by recovering the private keys of IEEE 1735 implementations from major Electronic Design Automation (EDA) tool vendors, namely Intel, Xilinx, Cadence, Siemens, Microsemi, and Lattice, while results on a seventh case study are withheld. As a consequence, we can decrypt, modify, and re-encrypt all allegedly protected IP cores designed for the respective tools, thus leading to an industry-wide break. As part of this analysis, we are the first to publicly disclose three RSA-based white-box schemes that are used in real-world products and present cryptanalytical attacks for all of them, finally resulting in key recovery.
Authored by Julian Speith, Florian Schweins, Maik Ender, Marc Fyrbiak, Alexander May, Christof Paar
Web browsers are among the most important but also most complex software solutions for accessing the web. It is therefore not surprising that web browsers are an attractive target for attackers. Especially in the last decade, security researchers and browser vendors have developed sandboxing mechanisms such as security-relevant HTTP headers to make browsers more secure. Although the security community is aware of the importance of security-relevant HTTP headers, legacy applications and individual requests from different parties have led to possibly insecure configurations of these headers. Even if specific security headers are configured correctly, conflicts in their functionalities may lead to unforeseen browser behaviors and vulnerabilities. Recently, the first work analyzing duplicated headers and conflicts in headers was published by Calzavara et al. at USENIX Security [1]. The authors focused on inconsistent protections arising from using both the HTTP header X-Frame-Options and the framing protection of the Content-Security-Policy. We extend their work by analyzing browser behaviors when parsing duplicated headers, conflicting directives, and values that do not conform to the defined ABNF metalanguage specification. We created an open-source testbed running over 19,800 test cases, of which nearly 300 are executed in a set of 66 different browsers. Our work shows that browsers largely conform to the specification and behave securely. However, all tested browsers behave differently when it comes, for example, to parsing the Strict-Transport-Security header. Moreover, Chrome, Safari, and Firefox behave differently if the header contains a character that is not allowed by the defined ABNF. This results in the protection mechanism being fully enforced, partially enforced, or not enforced at all and thus completely bypassable.
Authored by Hendrik Siewert, Martin Kretschmer, Marcus Niemietz, Juraj Somorovsky
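As a concrete instance of the duplicated-header problem studied here, the following sketch serves a response carrying two conflicting Strict-Transport-Security headers, so one can observe which value (if any) a given browser enforces. It uses only Python's standard library; note that real HSTS handling additionally requires HTTPS, so this is a parsing test harness only.

    # Hedged sketch: serve duplicated, conflicting security headers.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DuplicateHeaderHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Strict-Transport-Security", "max-age=31536000")
            self.send_header("Strict-Transport-Security", "max-age=0")  # conflict
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>duplicated header test case</h1>")

    HTTPServer(("127.0.0.1", 8080), DuplicateHeaderHandler).serve_forever()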
Modern web applications are getting more sophisticated by using frameworks that make development easy but pose challenges for security analysis tools. New analysis techniques are needed to handle such frameworks, which grow in number and popularity. In this paper, we describe Gelato, which addresses the most crucial challenges for a security-aware client-side analysis of highly dynamic web applications. In particular, we use a feedback-driven, state-aware crawler that is able to analyze complex framework-based applications automatically and is guided to maximize coverage of security-sensitive parts of the program. Moreover, we propose a new lightweight client-side taint analysis that outperforms state-of-the-art tools, requires no modification to browsers, and reports non-trivial taint flows on modern JavaScript applications. Gelato reports vulnerabilities with higher accuracy than existing tools and achieves significantly better coverage on 12 applications, three of which are used in production.
Authored by Behnaz Hassanshahi, Hyunjun Lee, Paddy Krishnan
Today, many Internet-based applications, especially e-commerce and banking applications, require the transfer of personal and sensitive data such as credit card information, and all of these operations are carried out over the Internet. Users frequently perform these transactions, which require high security, on websites they access via web browsers, making the browser one of the most fundamental pieces of software on the Internet. The security of the communication between the user and the website is provided by SSL certificates, which are used for server authentication. Certificates issued by Certificate Authorities (CAs) that have passed international audits must meet certain conditions. The criteria for issuing certificates are defined in the Baseline Requirements (BR) document published by the Certificate Authority/Browser (CA/B) Forum, which is accepted as the authority in the Web Public Key Infrastructure (Web PKI) ecosystem. Issuing certificates in accordance with the defined criteria is not sufficient on its own to establish a secure SSL connection; in order to ensure a secure connection and confirm the identity of the website, the certificate validation task falls to the web browsers with which users interact the most. In this study, a comprehensive SSL certificate public key infrastructure (SSL Test Suite) was established to test the behavior of web browsers against certificates that do not comply with the BR requirements. With the designed test suite, we aim to analyze the certificate validation behaviors of web browsers effectively.
Authored by Merve Şimşek, Tamer Ergun, Hüseyin Temuçin
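As an example of how one test-suite case might be generated, the sketch below uses the Python cryptography library to build a self-signed certificate whose validity period exceeds the BR limit for subscriber certificates (currently 398 days), which a conforming browser should reject. Names and periods are illustrative, and the actual suite covers many more BR violations.

    # Hedged sketch: generate a test certificate violating the BR validity limit.
    # Requires `pip install cryptography`; names and periods are illustrative.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.example")])
    now = datetime.datetime.utcnow()

    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)   # self-signed test case
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=1000))  # > 398 days
            .sign(key, hashes.SHA256()))

    with open("too_long_validity.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))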
A rendering regression is a bug introduced by a web browser where a web page no longer functions as users expect. Such rendering bugs critically harm the usability of web browsers as well as web applications. The unique aspect of rendering bugs is that they affect the presented visual appearance of web pages, but those web pages have no pre-defined correct appearance. Therefore, it is challenging to automatically detect errors in their appearance. In practice, web browser vendors rely on non-trivial and time-prohibitive manual analysis to detect and handle rendering regressions. This paper proposes R2Z2, an automated tool to find rendering regressions. R2Z2 uses the differential fuzz testing approach, which repeatedly compares the rendering results of two different versions of a browser while providing the same HTML as input. If the rendering results are different, R2Z2 further performs cross browser compatibility testing to check if the rendering difference is indeed a rendering regression. After identifying a rendering regression, R2Z2 will perform an in-depth analysis to aid in fixing the regression. Specifically, R2Z2 performs a delta-debugging-like analysis to pinpoint the exact browser source code commit causing the regression, as well as inspecting the rendering pipeline stages to pinpoint which pipeline stage is responsible. We implemented a prototype of R2Z2 particularly targeting the Chrome browser. So far, R2Z2 found 11 previously undiscovered rendering regressions in Chrome, all of which were confirmed by the Chrome developers. Importantly, in each case, R2Z2 correctly reported the culprit commit. Moreover, R2Z2 correctly pinpointed the culprit rendering pipeline stage in all but one case.
Authored by Suhwan Song, Jaewon Hur, Sunwoo Kim, Philip Rogers, Byoungyoung Lee
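The differential core of this approach can be sketched in a few lines: render the same HTML headlessly in two browser versions and compare the screenshots; any mismatch is a candidate regression for deeper analysis. The binary paths below are placeholders, and R2Z2 itself does considerably more (cross-browser compatibility checks, delta debugging, and pipeline-stage inspection).

    # Hedged sketch: differential screenshot test between two Chrome builds.
    # Binary paths are placeholders; --headless/--screenshot are real flags.
    import hashlib, pathlib, subprocess

    def screenshot(chrome, html_file, out_png):
        subprocess.run([chrome, "--headless", "--disable-gpu",
                        f"--screenshot={out_png}", "--window-size=800,600",
                        pathlib.Path(html_file).resolve().as_uri()], check=True)
        return hashlib.sha256(open(out_png, "rb").read()).hexdigest()

    old = screenshot("/opt/chrome-old/chrome", "input.html", "old.png")
    new = screenshot("/opt/chrome-new/chrome", "input.html", "new.png")
    if old != new:
        print("rendering difference: candidate regression for input.html")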
Due to the rise of the Internet, a business model known as online advertising has seen unprecedented success. However, it has also become a prime method through which criminals scam people. Often, even legitimate websites contain advertisements that link to scam websites, since the ads are not verified by the website's owners. Scammers have become quite creative, using various unorthodox and inconspicuous methods involving iframes, favicons, proxy servers, domains, and more. Many modern antivirus products are paid services and hence not a feasible option for many users in developing countries, who often do not own devices with enough RAM to run such software efficiently, leaving them without any options. This project aims to create a browser extension that distinguishes between safe and unsafe websites by utilizing machine learning algorithms. The system is lightweight and free, fulfilling the needs of people looking for a cheap and reliable security solution and allowing them to surf the Internet easily and safely. The system also scans all intermediate URL clicks, not just the main website, providing an even greater degree of security.
Authored by Rehan Fargose, Samarth Gaonkar, Paras Jadhav, Harshit Jadiya, Minal Lopes
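A hedged sketch of the core idea: extract lightweight lexical features from a URL and classify it with a small model. The features, toy training data, and model choice are illustrative only; a real system would train on a labeled phishing/scam corpus.

    # Hedged sketch: lexical URL features plus a small classifier.
    # Training data is a toy placeholder, not a real corpus.
    from urllib.parse import urlparse
    from sklearn.ensemble import RandomForestClassifier

    def features(url):
        p = urlparse(url)
        return [len(url), url.count("."), url.count("-"),
                int(p.scheme == "https"), int("@" in url),
                int(any(c.isdigit() for c in p.netloc))]

    train = ["https://example.com/login",
             "http://192.168.12.9/secure-update@pay.xyz"]
    labels = [0, 1]                      # 0 = safe, 1 = scam (toy labels)

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit([features(u) for u in train], labels)
    print(model.predict([features("http://free-prizes.win/claim@now")]))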
The study focused on assessing and testing Windows 10 to identify possible vulnerabilities and the system's ability to withstand cyber-attacks. CVE data, alongside other vulnerability reports, were instrumental in measuring the operating system's performance. Metasploit and Nmap were essential for the penetration and intrusion experiments, which ran in a simulated environment. The study applied the following testing procedure: information gathering, scanning and results analysis, vulnerability selection, launching attacks, and gaining access to the operating system. Penetration testing involved eight attacks, two of which were effective against the different Windows 10 versions. Installing the latest version of Windows 10 did not guarantee complete protection against attacks. Further research is essential for assessing the system's vulnerabilities and recommending better solutions.
Authored by Jasmin Softić, Zanin Vejzović
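A hedged sketch of the scanning phase of such a procedure, wrapping standard Nmap options (service detection plus the built-in vuln script category) from Python. The target address is a placeholder, and scans must only be run against systems you are authorized to test.

    # Hedged sketch: scanning phase with Nmap (authorized lab targets only).
    import subprocess

    TARGET = "192.168.56.101"   # placeholder: lab VM running Windows 10

    result = subprocess.run(
        ["nmap", "-sV",              # probe service and version information
         "--script", "vuln",         # run Nmap's vulnerability-detection scripts
         "-oN", "scan_results.txt",  # also save normal-format output
         TARGET],
        capture_output=True, text=True)

    print(result.stdout)             # review open ports and flagged CVEs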