Anonymous Authentication For Secure Data Stored On Cloud With Decentralized Access Control
A decentralized storage system for accessing data with anonymous authentication provides more secure user authentication and user revocation, and prevents replay attacks. Access control is enforced by decentralized key distribution centres (KDCs), which makes data encryption more secure; the decentralized KDCs are in turn grouped under a key generation centre (KGC). Our system authenticates users so that only authorized users can decrypt and view the stored information. The user validation and access-control scheme is decentralized, which helps prevent replay attacks and supports modification of data stored in the cloud. Access control schemes are gaining attention because it is important that only approved users have access to valid services. Our scheme supports creation, reading, and modification of data stored in the cloud while preventing replay attacks, and we also address user revocation. The problems of validation, access control, and privacy protection must be solved simultaneously.
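The replay-attack prevention mentioned above can be illustrated with a minimal, hypothetical sketch: the client attaches a fresh nonce and timestamp to each request and authenticates them with a key (here imagined as issued by a KDC), and the server rejects stale or repeated requests. All names (`sign_request`, `verify_request`, `SHARED_KEY`) are illustrative assumptions, not the paper's actual attribute-based scheme.

```python
import hmac
import hashlib
import time
import os

SHARED_KEY = b"demo-key"   # illustrative secret; imagined as issued by a KDC
SEEN_NONCES = set()        # server-side record of already-used nonces
MAX_SKEW = 30              # seconds a signed request stays valid

def sign_request(payload: bytes) -> dict:
    """Client side: attach a fresh nonce and timestamp, then MAC everything."""
    nonce = os.urandom(16).hex()
    ts = str(int(time.time()))
    msg = payload + nonce.encode() + ts.encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return {"payload": payload, "nonce": nonce, "ts": ts, "tag": tag}

def verify_request(req: dict) -> bool:
    """Server side: reject stale timestamps, reused nonces, or bad MACs."""
    if abs(time.time() - int(req["ts"])) > MAX_SKEW:
        return False                      # too old: possible replay
    if req["nonce"] in SEEN_NONCES:
        return False                      # nonce reuse: definite replay
    msg = req["payload"] + req["nonce"].encode() + req["ts"].encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, req["tag"]):
        return False                      # tampered request
    SEEN_NONCES.add(req["nonce"])
    return True
```

Because the server remembers each nonce, an attacker who records a valid write request cannot resubmit it later: the second submission fails the nonce check even though its MAC is still valid.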
The main objective of the work presented in this paper was to design and implement a system for Twitter data analysis and visualization in the R environment. Our focus was to leverage existing big-data processing frameworks, with their storage and computational capabilities, to support the analytical functions implemented in the R language. We built the backend on top of the Apache Hadoop framework, using HDFS as the distributed filesystem and MapReduce as the distributed computation paradigm. The RHadoop packages were then used to connect the R environment to the processing layer and to design and implement the analytical functions in a distributed manner. Visualizations were implemented on top of this solution as an RShiny application.
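As a language-agnostic illustration of the MapReduce paradigm the backend relies on (the actual system implements its analytics in R via RHadoop), a hypothetical hashtag count over tweets could be sketched as a map phase emitting key/value pairs and a reduce phase aggregating them:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(tweet: str):
    """Emit (hashtag, 1) pairs, mirroring a Hadoop map task."""
    for token in tweet.split():
        if token.startswith("#"):
            yield (token.lower(), 1)

def reduce_phase(pairs):
    """Sum counts per key, mirroring a Hadoop reduce task."""
    pairs = sorted(pairs, key=itemgetter(0))   # stands in for Hadoop's shuffle/sort
    for key, group in groupby(pairs, key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

tweets = ["#bigdata rocks #rstats", "analysing #bigdata with hadoop"]
intermediate = [pair for t in tweets for pair in map_phase(t)]
counts = dict(reduce_phase(intermediate))
```

In the real deployment, Hadoop distributes the map tasks across the cluster and sorts intermediate pairs between the phases; RHadoop's `mapreduce` function exposes the same two-phase structure to R code.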
Recent expansion in surveillance systems has motivated research in soft biometrics that enable unconstrained recognition of human faces. Comparative soft biometrics show superior recognition performance to categorical soft biometrics and have been the focus of several studies, which have highlighted their utility for recognition and retrieval in both constrained and unconstrained environments. These studies, however, only addressed face recognition and retrieval using human-generated attributes, raising the question of whether comparative labels can be generated automatically from facial images. In this paper, we propose an approach for the automatic comparative labelling of facial soft biometrics. Furthermore, we investigate unconstrained human face recognition using these automatically generated comparative soft biometrics against a human-labelled gallery (and vice versa). Using a subset of the LFW dataset, our experiments show the efficacy of the automatic generation of comparative facial labels, highlighting the potential extensibility of the approach to other face recognition scenarios and larger ranges of attributes.
Towards More Accurate Iris Recognition Using Cross-Spectral Matching
Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions of people. However, with the variety of iris sensors now deployed for imaging under different illumination and environmental conditions, significant performance degradation is expected when matching iris images acquired in two different domains (whether sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field (MRF) model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation capable of learning domain knowledge. Our approach, which estimates corresponding visible iris patterns by synthesizing iris patches from near-infrared iris images, achieves outperforming results for cross-spectral iris recognition. We also propose and evaluate a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondence. We present reproducible experimental results on three publicly available databases: the PolyU cross-spectral iris image database, IIITD CLI, and the UND database, achieving outperforming results for cross-sensor and cross-spectral iris matching.
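The naive Bayes nearest neighbor (NBNN) classification underlying the framework can be sketched generically: for each candidate class, every query patch descriptor is matched to its nearest gallery patch of that class, and the class with the smallest summed distance wins. This is a minimal NumPy illustration of NBNN in general, not the paper's cross-spectral feature representation; all names are assumptions.

```python
import numpy as np

def nbnn_classify(query_patches, gallery):
    """Naive Bayes Nearest Neighbor: sum per-class nearest-patch distances.

    query_patches: (n, d) array of patch descriptors from the probe image.
    gallery: dict mapping class label -> (m, d) array of patch descriptors.
    Returns the label whose gallery patches lie closest to the query overall.
    """
    totals = {}
    for label, patches in gallery.items():
        # pairwise squared distances between every query and gallery patch
        d2 = ((query_patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
        # for each query patch keep only its nearest gallery patch, then sum
        totals[label] = d2.min(axis=1).sum()
    return min(totals, key=totals.get)
```

The key design point of NBNN is that it avoids quantizing descriptors into a codebook: patch-to-class distances are computed directly, which is what lets a learned cross-domain representation plug in without retraining a classifier.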
Speed Detection Camera System Using Image Processing Techniques On Video Streams
This paper presents a new Speed Detection Camera System (SDCS) that is applicable as a radar alternative. SDCS applies several image processing techniques to a video stream, captured from a single camera in online or offline mode, which makes it capable of calculating the speed of moving objects while avoiding the problems of traditional radars. SDCS offers an inexpensive alternative to traditional radars with the same or better accuracy. SDCS processing can be divided into four successive phases. The first phase is object detection, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second phase is object tracking, which consists of three successive operations: object segmentation, object labelling, and object centre extraction. Object tracking takes into consideration the different possible scenarios for a moving object: simple tracking, an object leaving the scene, an object entering the scene, an object crossing another object, and one object leaving while another enters the scene. The third phase is speed calculation, where speed is derived from the number of frames the object takes to pass through the scene. The final phase captures images of objects that violate the speed limit. SDCS has been implemented and tested in many experiments, and it achieved satisfactory performance.
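The hybrid detection phase can be sketched minimally, assuming grayscale frames and a running-average background model; the thresholds, the learning rate, and the exact way the two cues are combined are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_motion(frames, bg, alpha=0.05, thresh=25):
    """Combine adaptive background subtraction with three-frame differencing.

    frames: at least three consecutive grayscale frames (2-D uint8 arrays).
    bg: background model as a float array of the same shape.
    Returns (mask, updated_bg): mask is True where both cues agree on motion.
    """
    f0, f1, f2 = [f.astype(np.int16) for f in frames[-3:]]

    # three-frame differencing: the current frame must differ from
    # both of the two previous frames at a motion pixel
    diff = (np.abs(f2 - f1) > thresh) & (np.abs(f2 - f0) > thresh)

    # adaptive background subtraction against the running-average model
    sub = np.abs(f2 - bg) > thresh

    mask = diff & sub                      # hybrid: both tests must fire
    bg = (1 - alpha) * bg + alpha * f2     # slowly absorb scene changes
    return mask, bg
```

The conjunction addresses the classic weakness of each cue alone: background subtraction leaves "ghosts" when the model lags behind scene changes, while frame differencing misses the interior of slow, uniform objects; requiring agreement suppresses both artifact types.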
An Improved Fatigue Detection System Based On Behavioral Characteristics Of Driver
Road accidents have increased significantly, and one of the major reported causes is driver fatigue. During continuous, long drives, the driver becomes exhausted and drowsy, which may lead to an accident. Therefore, there is a need for a system that measures the driver's fatigue level and alerts him or her upon drowsiness to avoid accidents. We propose a system comprising a camera installed on the car dashboard. The camera detects the driver's face and tracks its activity. From the driver's face, the system observes alterations in facial features and uses these features to estimate the fatigue level. The facial features include the eyes (fast blinking or heavy eyelids) and the mouth (yawn detection). Principal Component Analysis (PCA) is applied to reduce the features while minimizing the amount of information lost. The resulting parameters are processed by a Support Vector Classifier (SVC) to classify the fatigue level, and the classifier output is then sent to the alert unit.
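The PCA-plus-classifier stage can be sketched as follows. To keep the example dependency-free, a nearest-centroid classifier stands in for the paper's SVC, and the feature vectors and function names are illustrative assumptions; the shape of the pipeline (centre, project onto principal components, classify in the reduced space) is what the sketch demonstrates.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the mean and the top-k principal directions of X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal directions via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    """Project feature vectors onto the retained principal components."""
    return (X - mean) @ components.T

def nearest_centroid_fit(X, y):
    """One centroid per class in the reduced space (stand-in for SVC training)."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(X, centroids):
    """Assign each sample to the class with the closest centroid."""
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.array(labels)[d.argmin(axis=0)]
```

In the described system, `X` would hold per-frame measurements such as blink rate and mouth-opening extent, and the predicted label ("alert" vs. "fatigued") would drive the alert unit.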