Attribution (2014 Year in Review) Part 1

 

 

 

Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. The research presented here, all published in 2014, addresses a range of issues in both contexts.

 

 

Ya Zhang; Yi Wei; Jianbiao Ren, "Multi-touch Attribution in Online Advertising with Survival Theory," Data Mining (ICDM), 2014 IEEE International Conference on, pp. 687-696, 14-17 Dec. 2014. doi: 10.1109/ICDM.2014.130

Abstract: Multi-touch attribution, which distributes credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models lies in the fact that the rules are not derived from the data but are based only on simple intuition. With the ever-enhanced capability to track advertisements and users' interactions with them, data-driven multi-touch attribution models, which attempt to infer contributions from user interaction data, have become an important research direction. We here propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it is able to remove the presentation biases inherent in most other attribution models. In addition to modeling attribution, the proposed model is also able to predict a user's 'conversion' probability. We validate the proposed method with a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is quite promising in both conversion prediction and attribution.

Keywords: Internet; advertising data processing; data handling; probability; commercial advertising monitoring company; data-driven multitouch attribution models; digital advertising; online advertising; probabilistic framework; rule-based attribution models; survival theory; user conversion probability prediction; user interaction data; Advertising; Data models; Gold; Hazards; Hidden Markov models; Kernel; Predictive models; Multi-touch attribution; Online Advertising; Survival theory   (ID#:15-3978)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023386&isnumber=7023305
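
To make the survival-theory idea concrete, the sketch below illustrates the general mechanism in Python: each ad touch contributes a hazard toward conversion, and credit is split in proportion to each touch's hazard contribution. This is a minimal illustration, not the authors' model; the per-channel hazard rates are invented toy values that would, in the paper's setting, be fitted from user interaction data.

```python
# Minimal sketch of hazard-proportional multi-touch attribution.
# NOT the authors' model: channel hazard rates here are toy values that
# would in practice be estimated from user interaction data.
import numpy as np

channel_hazard = {"display": 0.02, "search": 0.10, "email": 0.05}

def attribute(touches):
    """Split conversion credit across touches by hazard contribution."""
    hazards = np.array([channel_hazard[t] for t in touches])
    return dict(zip(touches, hazards / hazards.sum()))

# A user saw a display ad, then a search ad, and converted:
print(attribute(["display", "search"]))
# -> {'display': ~0.167, 'search': ~0.833}
```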

 

Rivera, J.; Hare, F., "The Deployment Of Attribution Agnostic Cyberdefense Constructs And Internally Based Cyberthreat Countermeasures," Cyber Conflict (CyCon 2014), 2014 6th International Conference On, pp. 99-116, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916398

Abstract: Conducting active cyberdefense requires the acceptance of a proactive framework that acknowledges the lack of predictable symmetries between malicious actors and their capabilities and intent. Unlike physical weapons such as firearms, naval vessels, and piloted aircraft, all of which risk physical exposure when engaged in direct combat, cyberweapons can be deployed (often without their victims' awareness) under the protection of the anonymity inherent in cyberspace. Furthermore, it is difficult in the cyber domain to determine with accuracy what a malicious actor may target and what type of cyberweapon the actor may wield. These aspects imply an advantage for malicious actors in cyberspace that is greater than for those in any other domain, as the malicious cyberactor, under current international constructs and norms, has the ability to choose the time, place, and weapon of engagement. This being said, if defenders are to successfully repel attempted intrusions, then they must conduct an active cyberdefense within a framework that proactively engages threatening actions independent of a requirement to achieve attribution. This paper proposes that private business, government personnel, and cyberdefenders must develop a threat identification framework that does not depend upon attribution of the malicious actor, i.e., an attribution agnostic cyberdefense construct. Furthermore, upon developing this framework, network defenders must deploy internally based cyberthreat countermeasures that take advantage of defensive network environmental variables and alter the calculus of nefarious individuals in cyberspace. Only by accomplishing these two objectives can the defenders of cyberspace actively combat malicious agents within the virtual realm.

Keywords: security of data; active cyberdefense; anonymity protection; attribution agnostic cyberdefense constructs; cyber domain; cyberdefenders; cyberweapons; government personnel; internally based cyberthreat countermeasures; international constructs; international norms; malicious actor; physical weapons; private business; proactive framework; threat identification framework; Computer security; Cyberspace; Educational institutions; Government; Internet; Law; active defense; attribution agnostic cyberdefense construct; internally based cyberthreat countermeasures   (ID#:15-3979)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916398&isnumber=6916383

 

Dirik, A.E.; Sencar, H.T.; Memon, N., "Analysis of Seam-Carving-Based Anonymization of Images Against PRNU Noise Pattern-Based Source Attribution," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 12, pp. 2277-2290, Dec. 2014. doi: 10.1109/TIFS.2014.2361200

Abstract: The availability of sophisticated source attribution techniques raises new concerns about the privacy and anonymity of photographers, activists, and human rights defenders who need to stay anonymous while spreading their images and videos. Recently, the use of seam-carving, a content-aware resizing method, has been proposed to anonymize the source camera of images against the well-known photoresponse nonuniformity (PRNU)-based source attribution technique. In this paper, we provide an analysis of the seam-carving-based source camera anonymization method by determining the limits of its performance, introducing two adversarial models. Our analysis shows that the effectiveness of the deanonymization attacks depends on various factors, including the parameters of the seam-carving method, the strength of the PRNU noise pattern of the camera, and an adversary's ability to identify uncarved image blocks in a seam-carved image. Our results show that, for the general case, there should not be many uncarved blocks larger than $50 \times 50$ pixels for successful anonymization of the source camera.

Keywords: image coding; image denoising; PRNU noise pattern-based source attribution; content-aware resizing method; deanonymization attacks; image anonymization; photoresponse nonuniformity; seam-carving method; seam-carving-based anonymization; source attribution techniques; Cameras; Correlation; Image quality; Noise; Videos; PRNU noise pattern; anonymization; counter-forensics; de-anonymization attacks; seam-carving; source attribution   (ID#:15-3980)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914598&isnumber=6953163
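
For orientation, the sketch below shows the standard PRNU correlation test that seam-carving is meant to defeat; the paper's implication is that an adversary who can locate uncarved blocks larger than 50 x 50 pixels can run this same test block-wise. This is a hedged illustration, not the paper's adversarial models: the camera fingerprint is assumed to be pre-estimated, and a Gaussian filter stands in for the wavelet denoiser typically used in PRNU work.

```python
# Illustrative PRNU source-attribution test (not the paper's attack models).
# 'fingerprint' is assumed to be a pre-estimated camera PRNU pattern, and
# gaussian_filter is a stand-in for a wavelet-style denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(img):
    return gaussian_filter(img, sigma=1.0)

def prnu_correlation(image, fingerprint):
    """Normalized correlation between the image's noise residual and the
    fingerprint-modulated image; high values indicate the same camera."""
    residual = image - denoise(image)
    a = residual - residual.mean()
    b = image * fingerprint
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```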

 

Tennyson, M.F.; Mitropoulos, F.J., "Choosing a Profile Length in the SCAP Method of Source Code Authorship Attribution," SOUTHEASTCON 2014, IEEE, pp. 1-6, 13-16 March 2014. doi: 10.1109/SECON.2014.6950705

Abstract: Source code authorship attribution is the task of determining the author of source code whose author is not explicitly known. One specific method of source code authorship attribution that has been shown to be extremely effective is the SCAP method. This method, however, relies on a parameter L that has heretofore been quite nebulous. In the SCAP method, each candidate author's known work is represented as a profile of that author, where the parameter L defines the profile's maximum length. In this study, alternative approaches for selecting a value for L were investigated. Several alternative approaches were found to perform better than the baseline approach used in the SCAP method. The approach that performed the best was empirically shown to improve the performance from 91.0% to 97.2% measured as a percentage of documents correctly attributed using a data set consisting of 7,231 programs written in Java and C++.

Keywords: C++ language; Java; source code (software); C++ language; Java language; SCAP method; data set; profile length; source code authorship attribution; Frequency control; Frequency measurement; RNA; authorship attribution; information retrieval; plagiarism detection; software forensics   (ID#:15-3981)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950705&isnumber=6950640
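
The SCAP scheme itself is compact enough to sketch: an author's profile is the set of the L most frequent byte-level n-grams in that author's known code, and an unknown document is attributed to the author whose profile shares the most n-grams with the document's profile. The snippet below is a minimal rendering of that scheme; the n-gram size and L are illustrative defaults, not the values this paper derives.

```python
# Minimal sketch of SCAP-style authorship attribution. The profile length L
# (the parameter this paper tunes) caps how many top n-grams each profile keeps.
from collections import Counter

def profile(source: str, n: int = 6, L: int = 2000) -> set:
    """The L most frequent byte-level n-grams of a body of source code."""
    ngrams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {g for g, _ in ngrams.most_common(L)}

def attribute(unknown: str, known: dict, n: int = 6, L: int = 2000) -> str:
    """Pick the author whose profile overlaps most with the unknown document."""
    doc = profile(unknown, n, L)
    return max(known, key=lambda a: len(profile(known[a], n, L) & doc))

# Usage: attribute(mystery_code, {"alice": alice_code, "bob": bob_code})
```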

 

Shaobu Wang; Shuai Lu; Ning Zhou; Guang Lin; Elizondo, M.; Pai, M.A., "Dynamic-Feature Extraction, Attribution, and Reconstruction (DEAR) Method for Power System Model Reduction," Power Systems, IEEE Transactions on, vol. 29, no. 5, pp. 2049-2059, Sept. 2014. doi: 10.1109/TPWRS.2014.2301032

Abstract: In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step for matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields better reduction ratio and response errors than the traditional coherency based reduction methods.

Keywords: IEEE standards; cost reduction; dynamic response; electric generators; feature extraction; power system dynamic stability; power system interconnection; power system transient stability; reduced order systems; DEAR Method; IEEE standard; characteristic generator state variable; computational cost reduction; dynamic feature extraction, attribution, and reconstruction method; dynamic response measurement; power system interconnection; power system model reduction; quasi-nonlinear reduced model; transient stability; Computational modeling; Feature extraction; Generators; Power system dynamics; Power system stability; Reduced order systems; Rotors; Dynamic response; feature extraction; model reduction; orthogonal decomposition; power systems   (ID#:15-3982)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730699&isnumber=6879345
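
The three DEAR steps translate into a rough numerical recipe, sketched below under simplifying assumptions (this is not the authors' algorithm, only the skeleton the abstract describes): SVD on measured responses for feature extraction, a loading score to pick characteristic generators for feature attribution, and least squares for reconstruction.

```python
# Rough sketch of the DEAR skeleton: feature extraction (SVD), attribution
# (pick characteristic generators by loading on dominant features), and
# reconstruction (least-squares fit of all generators onto the chosen ones).
# The loading-score heuristic is an assumption, not the paper's criterion.
import numpy as np

def dear_reduce(X, k):
    """X: (time_steps, n_generators) post-disturbance responses; k: # chars."""
    U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    scores = (Vt[:k] ** 2).sum(axis=0)      # loading on k dominant features
    chars = np.argsort(scores)[-k:]         # characteristic generators
    B, *_ = np.linalg.lstsq(X[:, chars], X, rcond=None)
    return chars, B                         # reconstruct as X[:, chars] @ B
```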

 

Pratanwanich, N.; Lio, P., "Who Wrote This? Textual Modeling with Authorship Attribution in Big Data," Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, pp. 645-652, 14 Dec. 2014. doi: 10.1109/ICDMW.2014.140

Abstract: By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce dimension and understand the content of documents. These techniques originally analyzed the words in documents, but their extensions now accommodate meta-data such as authorship information, which has proved useful for textual modeling. The importance of learning authorship is to extract author interests and assign authors to anonymous texts. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model makes the simplifying assumption that each author contributes equally to a multiple-author document. To overcome this limitation, we assume that authors contribute to a document in different degrees, modeled with a Dirichlet distribution. This automatically transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which adds the novel capability of authorship prediction on anonymous texts. The SAT model outperforms the AT model for identifying the authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under the Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques, e.g., random forests, but also maintains the characteristics of the unsupervised models for information discovery, i.e., the word distributions of topics, author interests, and author contributions.

Keywords: Big Data; meta data; text analysis; unsupervised learning; AUC; Big Data; Dirichlet distribution; ROC curve; SAT model; area under curve; author-topic model; authorship attribution; authorship learning; authorship prediction; dimension reduction; information discovery; meta-data; multiple-author documents; receiver operating characteristic curve; supervised author-topic model; textual modeling; topic representations; topic-based generative models; unsupervised AT model; unsupervised learning technique; Analytical models; Computational modeling; Data models; Mathematical model; Predictive models; Training; Vectors; Authorship attribution; Bayesian inference; High dimensional textual data; Information discovery; Probabilistic topic models   (ID#:15-3983)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022657&isnumber=7022545
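
The SAT model is the paper's contribution and has no off-the-shelf implementation that can be assumed here; as a reference point, the unsupervised Author-Topic model it extends is available in gensim. The toy corpus and author map below are placeholders.

```python
# Baseline (unsupervised) Author-Topic model via gensim -- the model this
# paper extends, not the proposed SAT model. Corpus contents are toy data.
from gensim.corpora import Dictionary
from gensim.models import AuthorTopicModel

docs = [["attack", "attribution", "forensics"],
        ["topic", "model", "authorship", "attribution"]]
author2doc = {"alice": [0], "bob": [1], "carol": [0, 1]}  # carol co-authors both

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
model = AuthorTopicModel(corpus=corpus, num_topics=2, id2word=dictionary,
                         author2doc=author2doc)
print(model.get_author_topics("carol"))  # topic distribution for one author
```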

 

Marukatat, R.; Somkiadcharoen, R.; Nalintasnai, R.; Aramboonpong, T., "Authorship Attribution Analysis of Thai Online Messages," Information Science and Applications (ICISA), 2014 International Conference on, pp. 1-4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847369

Abstract: This paper presents a framework to identify the authors of Thai online messages. The identification is based on 53 writing attributes and the selected algorithms are support vector machine (SVM) and C4.5 decision tree. Experimental results indicate that the overall accuracies achieved by the SVM and the C4.5 were 79% and 75%, respectively. This difference was not statistically significant (at 95% confidence interval). As for the performance of identifying individual authors, in some cases the SVM was clearly better than the C4.5. But there were also other cases where both of them could not distinguish one author from another.

Keywords: decision trees; natural language processing; support vector machines; C4.5 decision tree; SVM; Thai online messages; author identification; authorship attribution analysis; support vector machine; writing attributes; Accuracy; Decision trees; Kernel; Support vector machines; Training; Training data; Writing   (ID#:15-3984)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847369&isnumber=6847317
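
The experimental setup maps directly onto a few lines of scikit-learn, sketched below with random stand-ins for the 53 writing attributes and with CART (scikit-learn's decision tree) as a close analogue of C4.5.

```python
# Harness mirroring the paper's comparison: SVM vs. decision tree on
# 53-dimensional stylometric vectors. Features here are random placeholders,
# so accuracies will be near chance; real writing attributes would go in X.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 53))      # 200 messages x 53 writing attributes
y = rng.integers(0, 10, size=200)   # 10 candidate authors

for clf in (SVC(kernel="rbf"), DecisionTreeClassifier()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(acc, 3))
```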

 

Okuno, S.; Asai, H.; Yamana, H., "A Challenge Of Authorship Identification For Ten-Thousand-Scale Microblog Users," Big Data (Big Data), 2014 IEEE International Conference on, pp. 52-54, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004491

Abstract: Internet security issues require authorship identification for all kinds of internet content; however, authorship identification for microblog users is much harder than for other documents because microblog texts are too short. Moreover, when the number of candidates becomes large, i.e., big data, identification takes a long time. Our proposed method solves these problems. The experimental results show that our method successfully identifies authorship with 53.2% precision over 10,000 microblog users, in almost half the execution time of the previous method.

Keywords: Big Data; security of data; social networking (online); Internet security issues; authorship identification; big data; microblog texts; ten-thousand-scale microblog users; Big data; Blogs; Computers; Distance measurement; Internet; Security; Training; Twitter; authorship attribution; authorship detection; authorship identification; microblog   (ID#:15-3985)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004491&isnumber=7004197
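
The abstract does not spell out the authors' method, so the sketch below shows only the common baseline such work competes against: character n-gram TF-IDF profiles per candidate author, ranked by cosine similarity against the unknown text. The corpus strings are placeholders.

```python
# Common short-text authorship baseline (NOT the paper's method): character
# n-gram TF-IDF profiles compared by cosine similarity. Texts are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

authors = {"user_a": "all known tweets of user a concatenated ...",
           "user_b": "all known tweets of user b concatenated ..."}
unknown = "one more short tweet of unknown origin"

vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vec.fit_transform(list(authors.values()) + [unknown])
sims = cosine_similarity(X[-1], X[:-1]).ravel()
print(max(zip(sims, authors))[1])   # candidate with the highest similarity
```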

 

Yuying Wang; Xingshe Zhou, "Spatio-Temporal Semantic Enhancements For Event Model Of Cyber-Physical Systems," Signal Processing, Communications and Computing (ICSPCC), 2014 IEEE International Conference on, pp. 813-818, 5-8 Aug. 2014. doi: 10.1109/ICSPCC.2014.6986310

Abstract: Newly emerging cyber-physical systems (CPS) discover events from multiple, distributed sources at multiple levels of detail and in heterogeneous data formats, which can be hard to compare and integrate, and hence hard to combine into a determination for action. While existing efforts have mainly focused on investigating a uniform CPS event representation with spatio-temporal attributes, in this paper we propose a new event model with a two-layer structure: a Basic Event Model (BEM) and an Extended Information Set (EIS). A BEM can be extended with an EIS by a semantic adaptor for spatio-temporal and other attribution enhancement. In particular, we define event-processing functions, such as event attribution extraction and composition determination, for CPS action triggering, exploiting the complex event processing (CEP) engine Esper. Examples show that such an event model provides several advantages in terms of extensibility, flexibility, and heterogeneous support, and lays the foundation for event-based system design in CPS.

Keywords: embedded systems; programming language semantics; BEM; CEP engine Esper; CPS; CPS event representation; EIS; attribution enhancement; basic event model; complex event process; composition determination; cyber-physical systems; event attribution extraction; event process functions; extended information set; multilevel heterogeneous embedded system; semantic adaptor; spatio-temporal attributes; spatio-temporal semantic enhancements; Adaptation models; Computational modeling; Data models; Observers; Semantics; Sensor phenomena and characterization; Complex Event Process; Cyber-physical systems; event modeling; event semantic; spatio-temporal event   (ID#:15-3986)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6986310&isnumber=6986138
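
The two-layer structure can be pictured with a few type definitions; the field names below are assumptions inferred from the abstract, not the authors' schema.

```python
# Toy rendering of the two-layer event model: BEM carries the uniform core
# event, EIS adds spatio-temporal semantics via an adaptor. Field names are
# assumptions based on the abstract.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class BEM:                                   # layer 1: basic event
    event_id: str
    source: str
    payload: dict

@dataclass
class EIS:                                   # layer 2: semantic enhancement
    timestamp: Optional[float] = None
    location: Optional[Tuple[float, float]] = None   # e.g. (lat, lon)
    extra: dict = field(default_factory=dict)

@dataclass
class CPSEvent:
    bem: BEM
    eis: EIS = field(default_factory=EIS)

evt = CPSEvent(BEM("e42", "temp-sensor-7", {"celsius": 81.5}),
               EIS(timestamp=1407222000.0, location=(34.26, 108.94)))
```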

 

Balakrishnan, R.; Parekh, R., "Learning to Predict Subject-Line Opens For Large-Scale Email Marketing," Big Data (Big Data), 2014 IEEE International Conference on, pp. 579-584, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004277

Abstract: Billions of dollars of services and goods are sold through email marketing. Subject lines have a strong influence on the open rates of e-mails, as consumers often open e-mails based on the subject. Traditionally, e-mail subject lines are compiled based on the best assessment of human editors. We propose a method to help the editors by predicting subject-line open rates, learning from past subject lines. The method derives different types of features from subject lines based on keywords, the performance of past subject lines, and syntax. Furthermore, we evaluate the contribution of individual subject-line keywords to overall open rates using an iterative method, namely Attribution Scoring, and use this for improved predictions. A random-forest-based model is trained to combine these features to predict performance. We use a dataset of more than a hundred thousand different subject lines with many billions of impressions to train and test the method. The proposed method shows significant improvement in prediction accuracy over the baselines for both new and already used subject lines.

Keywords: electronic mail; learning (artificial intelligence); marketing data processing; attribution scoring iterative method; human editors; large-scale e-mail marketing; open e-mail rates; performance prediction accuracy improvement; random forest based model training; subject line performance; subject line syntax; subject-line Keywords; subject-line open rate prediction learning; Accuracy; Business; Electronic mail; Feature extraction; Postal services; Predictive models; Weight measurement; deals; email; learning; subject   (ID#:15-3987)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004277&isnumber=7004197
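
The pipeline the abstract describes (hand-built keyword, history, and syntax features feeding a random forest) can be sketched as below; the feature set and the toy training rows are illustrative, not the paper's.

```python
# Illustrative subject-line open-rate predictor: keyword attribution score,
# past performance, and simple syntax features into a random forest. All
# feature choices and training values here are toy assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def features(subject, keyword_score, past_open_rate):
    return [keyword_score,                        # from attribution scoring
            past_open_rate,                       # history of similar lines
            len(subject.split()),                 # syntax: word count
            int("!" in subject),                  # syntax: exclamation mark
            int(any(c.isdigit() for c in subject))]

X = np.array([features("50% off today!", 0.8, 0.12),
              features("Weekly newsletter", 0.3, 0.05),
              features("Your order has shipped", 0.5, 0.20)])
y = np.array([0.15, 0.04, 0.22])                  # observed open rates (toy)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([features("Flash sale: 3 days only!", 0.7, 0.10)]))
```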

 

Alsaleh, M.N.; Al-Shaer, E.A., "Security Configuration Analytics Using Video Games," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 256-264, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997493

Abstract: Computing systems today have a large number of security configuration settings that enforce security properties. However, vulnerabilities and incorrect configuration increase the potential for attacks. Provable verification and simulation tools have been introduced to eliminate configuration conflicts and weaknesses, which can increase system robustness against attacks. Most of these tools require special knowledge of formal methods and precise specification of requirements in special languages, in addition to their excessive need for computing resources. Video games have been utilized by researchers to make educational software more attractive and engaging. Publishing these games for crowdsourcing can also stimulate competition between players and increase the games' educational value. In this paper we introduce a game interface, called NetMaze, that represents the network configuration verification problem as a video game and allows for attack analysis. We aim to make security analysis and hardening usable and accurately achievable, using the power of video games and the wisdom of crowdsourcing. Players can easily discover weaknesses in network configuration and investigate new attack scenarios. In addition, the gameplay scenarios can also be used to analyze and learn attack attribution considering human factors. In this paper, we present a provable mapping from the network configuration to 3D game objects.

Keywords: computer games; courseware; formal verification; human factors; security of data; specification languages; user interfaces; 3D game object; NetMaze; attack analysis; attack attribution; computing systems; configuration conflict; crowdsourcing; educational software; formal methods; game educational value; game interface; gameplay scenario; human factor; network configuration verification problem; provable mapping; provable verification; security analysis; security configuration analytics; security configuration settings; security property; simulation tool; special languages; system robustness; video games; vulnerability; Communication networks; Computational modeling; Conferences; Games; Network topology; Security; Topology   (ID#:15-3988)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997493&isnumber=6997445

 

Xiong Xu; Yanfei Zhong; Liangpei Zhang, "Adaptive Subpixel Mapping Based on a Multiagent System for Remote-Sensing Imagery," Geoscience and Remote Sensing, IEEE Transactions on, vol. 52, no. 2, pp. 787-804, Feb. 2014. doi: 10.1109/TGRS.2013.2244095

Abstract: The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundances of the different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel will still be unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.

Keywords: geophysical image processing; image classification; multi-agent systems; neural nets; remote sensing; adaptive subpixel mapping framework; adaptive subpixel mapping technique; artificial images; back-propagation neural network; boundary-mixed pixel; class abundance; class labels; coarser spectrally unmixed fraction images; decision agents; feature detection agent kinds; fine-resolution map; hard classification method; identical mixed pixel type; linear subpixel; mixed pixel problem; mixed pixel structure reconstruction; multiagent subpixel mapping framework; multiagent system; remote-sensing image classification; remote-sensing imagery; soft classification; spatial attraction model; spectral unmixing techniques; subpixel mapping accuracy; subpixel mapping agents; subpixel mapping algorithm performance; subpixel mapping problem; subpixel spatial attribution; synthetic remote-sensing images; traditional subpixel mapping algorithms; Algorithm design and analysis; Feature extraction; Image reconstruction; Multi-agent systems; Optimization; Remote sensing; Multiagent system; remote sensing; resolution enhancement; subpixel mapping; super-resolution mapping   (ID#:15-3989)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6479297&isnumber=6680673

 

Liu, J.N.K.; Yanxing Hu; You, J.J.; Yulin He, "An Advancing Investigation On Reduct And Consistency For Decision Tables In Variable Precision Rough Set Models," Fuzzy Systems (FUZZ-IEEE), 2014 IEEE International Conference on, pp. 1496-1503, 6-11 July 2014. doi: 10.1109/FUZZ-IEEE.2014.6891766

Abstract: The Variable Precision Rough Set (VPRS) model is one of the most important extensions of classical Rough Set (RS) theory. It employs a majority inclusion relation mechanism to make the classical RS model more fault tolerant, thereby improving the generalization of the model. This paper can be viewed as an extension of previous investigations of the attribution reduction problem in the VPRS model. In our investigation, we illustrate with examples that the previously proposed reduct definitions may spoil the hidden classification ability of a knowledge system by ignoring certain essential attributes in some circumstances. Consequently, by proposing a new β-consistent notion, we analyze the relationship between the structure of a Decision Table (DT) and different definitions of reduct in the VPRS model. We then give a new notion of β-complement reduct that avoids the defects of the reduct notions defined in the previous literature. We also supply a method to obtain the β-complement reduct using a decision-table splitting algorithm, and finally demonstrate the feasibility of our approach with sample instances.

Keywords: data integrity; data reduction; decision tables; pattern classification; rough set theory; β-complement reduct;β-consistent notion; VPRS model; attribution reduction problem; classical RS model; classical rough set theory; decision table splitting algorithm; decision table structures; hidden classification ability; majority inclusion relation mechanism; variable precision rough set model; Analytical models; Computational modeling; Educational institutions; Electronic mail; Fault tolerance; Fault tolerant systems; Mathematical model   (ID#:15-3990)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891766&isnumber=6891523
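
The majority-inclusion mechanism at the heart of VPRS is easy to show with a worked example: an equivalence class X of the condition attributes belongs to the β-positive region of a decision class Y when |X ∩ Y| / |X| ≥ β. The objects and β below are invented for illustration.

```python
# Worked example of the VPRS majority-inclusion relation underpinning the
# reduct definitions discussed in the paper. Objects and beta are toy values.
def inclusion_degree(X: set, Y: set) -> float:
    return len(X & Y) / len(X)

X1, X2 = {1, 2, 3, 4}, {5, 6}   # equivalence classes of condition attributes
Y = {1, 2, 3, 6}                # objects whose decision value is "yes"
beta = 0.7

for name, X in (("X1", X1), ("X2", X2)):
    deg = inclusion_degree(X, Y)
    region = "beta-positive region" if deg >= beta else "boundary"
    print(name, deg, region)
# X1: 0.75 >= 0.7 -> positive region; X2: 0.50 -> boundary
```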

 

Hauger, W.K.; Olivier, M.S., "The Role Of Triggers In Database Forensics," Information Security for South Africa (ISSA), 2014, pp. 1-7, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950506

Abstract: An aspect of database forensics that has not received much attention in the academic research community yet is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database. These changes can be on the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work. They are simply interrogating the data and metadata without making any changes. This paper attempts to establish if the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers as defined in the SQL standard were studied together with a number of database trigger implementations. This was done in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, it finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.

Keywords: SQL; digital forensics; meta data; SQL standard; attribution processes; data definition level; database forensics; database trigger analysis; database trigger handling; database triggers; digital forensic analysis methods; forensic interpretation; metadata; Databases; Dictionaries; Forensics; Irrigation; Monitoring; Reliability; database forensics; database triggers; digital forensic analysis; methods; processes   (ID#:15-3991)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950506&isnumber=6950479
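
The core risk the paper identifies is easy to reproduce: a trigger can silently change other data whenever an investigator or tool writes to the database. The SQLite demonstration below uses invented table and trigger names; it shows a trigger wiping an audit table as a side effect of a single row update.

```python
# Demonstration of the paper's concern: hidden trigger logic firing on a
# write. Table and trigger names are invented for this toy scenario.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE evidence (id INTEGER PRIMARY KEY, note TEXT);
CREATE TABLE audit    (id INTEGER PRIMARY KEY, entry TEXT);
INSERT INTO evidence VALUES (1, 'original entry');
INSERT INTO audit    VALUES (1, 'incriminating log line');
-- Malicious trigger: any update to 'evidence' also wipes the audit trail.
CREATE TRIGGER wipe_audit AFTER UPDATE ON evidence
BEGIN
    DELETE FROM audit;
END;
""")
con.execute("UPDATE evidence SET note = 'tool touched this row' WHERE id = 1")
print(con.execute("SELECT COUNT(*) FROM audit").fetchone()[0])  # -> 0
```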

 

Jantsch, A.; Tammemae, K., "A Framework Of Awareness For Artificial Subjects," Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2014 International Conference on, pp. 1-3, 12-17 Oct. 2014. doi: 10.1145/2656075.2661644

Abstract: We review the concepts of environment- and self-models, semantic interpretation, semantic attribution, history, goals and expectations, prediction, and self-inspection; how they contribute to awareness and self-awareness; and how they contribute to improved robustness and sensibility of behavior. Researchers have for some time realized that a sense of “awareness” of many embedded systems' own situation is a facilitator for robust and dependable behaviour even under radical environmental changes and drastically diminished capabilities. This insight has recently led to a proliferation of work on self-awareness and other system properties such as self-organization, self-configuration, self-optimization, self-protection, self-healing, etc., which are sometimes subsumed under the term “self-*”.

Keywords: artificial intelligence; embedded systems; fault tolerant computing; optimisation; artificial subject awareness; embedded systems; environment model; self-awareness; self-healing; self-model; self-optimization; semantic attribution; semantic interpretation; Educational institutions; Engines; History; Monitoring; Predictive models; Robustness; Semantics   (ID#:15-3992)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6971836&isnumber=6971816

 

Lin Chen; Lu Zhou; Chunxue Liu; Quan Sun; Xiaobo Lu, "Occlusive Vehicle Tracking Via Processing Blocks In Markov Random Field," Progress in Informatics and Computing (PIC), 2014 International Conference on, pp. 294-298, 16-18 May 2014. doi: 10.1109/PIC.2014.6972344

Abstract: The technology of vehicle video detection and tracking has played an important role in the ITS (Intelligent Transportation Systems) field in recent years. The occlusion phenomenon among vehicles is one of the most difficult problems related to vehicle tracking. In order to handle occlusion, this paper proposes an effective solution that applies a Markov Random Field (MRF) to traffic images. The contour of the vehicle is first detected using background subtraction; then a number of blocks carrying the vehicle's texture and motion information are filled in inside each vehicle. We extract several kinds of information from each block for the subsequent tracking. For each occluded block, two groups of clique functions in the MRF model are defined, representing spatial correlation and motion coherence respectively. By calculating each occluded block's total energy function, we finally solve the attribution problem for occluded blocks. The experimental results show that our method can handle occlusion problems effectively and track each vehicle continuously.

Keywords: Markov processes; image motion analysis; image texture; intelligent transportation systems; object detection; object tracking; video signal processing; ITS; MRF model; Markov random field; attribution problem; background subtraction; clique functions; information extraction; intelligent transportation systems; motion coherence; occlusion handling; occlusion phenomenon; occlusive block total energy function; occlusive vehicle tracking; processing blocks; spatial correlation; traffic images; vehicle contour; vehicle motion information; vehicle texture information; vehicle video detection; Image resolution; Markov random fields; Robustness; Tracking; Vectors; Vehicle detection; Vehicles; Markov Random Field (MRF); occlusion; vehicle detection; vehicle tracking   (ID#:15-3993)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972344&isnumber=6972283
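
The block-attribution step reduces to an energy minimization, sketched below in toy form: each occluded block is assigned to the vehicle that minimizes a weighted sum of a spatial term and a motion-coherence term. The weights and features are illustrative, not the paper's clique potentials.

```python
# Toy block-to-vehicle assignment by energy minimization: spatial distance
# plus motion mismatch. Weights and feature choices are assumptions, not the
# paper's MRF clique functions.
import numpy as np

def block_energy(b_pos, b_motion, v_pos, v_motion, w_s=1.0, w_m=2.0):
    e_spatial = np.linalg.norm(np.subtract(b_pos, v_pos))
    e_motion = np.linalg.norm(np.subtract(b_motion, v_motion))
    return w_s * e_spatial + w_m * e_motion

vehicles = {"A": ((10, 12), (1.0, 0.0)), "B": ((14, 12), (-0.5, 0.2))}
block = ((12, 12), (0.9, 0.1))     # (position, motion) of an occluded block

label = min(vehicles, key=lambda v: block_energy(*block, *vehicles[v]))
print(label)   # -> 'A': spatial terms tie, so motion coherence decides
```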


Note:



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.