© University of Derby 2014

Recent Submissions

  • MARINE: Man-in-the-middle attack resistant trust model IN connEcted vehicles

    Ahmad, Farhan; Kurugollu, Fatih; Adnane, Asma; Hussain, Rasheed; Hussain, Fatima; University of Derby; Loughborough University; Innopolis University, Russia; API Delivery & Operations, Royal Bank of Canada, Toronto, Canada (IEEE, 2020-01-17)
    Vehicular Ad-hoc NETwork (VANET), a novel technology, holds paramount importance within the transportation domain due to its ability to increase traffic efficiency and safety. Connected vehicles propagate sensitive information which must be shared with neighbours in a secure environment. However, a VANET may also include dishonest nodes such as Man-in-the-Middle (MiTM) attackers aiming to distribute and share malicious content with the vehicles, thus polluting the network with compromised information. In this regard, establishing trust among connected vehicles can increase security, as every participating vehicle will generate and propagate authentic, accurate and trusted content within the network. In this paper, we propose a novel trust model, namely Man-in-the-middle Attack Resistance trust model IN connEcted vehicles (MARINE), which efficiently identifies dishonest nodes performing MiTM attacks and revokes their credentials. Every node running the MARINE system first establishes trust in the sender by performing multi-dimensional plausibility checks. Once the receiver verifies the trustworthiness of the sender, the received data is evaluated both directly and indirectly. Extensive simulations are carried out to rigorously evaluate the performance and accuracy of MARINE across three MiTM attacker models against a benchmark trust model. Simulation results show that for a network containing 35% MiTM attackers, MARINE outperforms the state-of-the-art trust model with improvements of 15%, 18%, and 17% in precision, recall and F-score, respectively.
  • Graph data modelling for genomic variants

    Anjum, Ashiq; Aizad, Sanna; University of Derby (IEEE, 2019)
    Genome variant analysis is performed on Variant Call Format (VCF) files. It can take days to process these files for genome analytics due to challenges such as loading the files for each user query and processing them to answer questions of interest. As data sizes grow, timely processing of this data puts enormous pressure on computational resources, leading to significant processing delays that may jeopardise the ultimate goal of bringing genomic discoveries to the masses. We believe this problem will not be solved until the underlying data structure used to organise and process these files undergoes a transformation. To overcome this problem, we have proposed a graph-based system to represent the data in VCF files. This allows the data to be loaded once into a graph model which is then queried and processed numerous times without any additional computational or data access penalties. This reduces data access time by giving constant-time access to any node, and addresses performance and scalability challenges that have been a limiting factor for the mass-scale adoption of genome analytics. It takes only 2 ms to access any data node in our graph model, and this remains constant for any number of nodes.
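    The constant-time node access described above follows naturally from hash-backed adjacency structures. As an illustrative sketch only (the paper's actual graph schema is not reproduced here, and all node names and attributes below are hypothetical), a dictionary-backed variant graph might look like:

```python
# Hypothetical sketch: variants and samples as graph nodes, edges linking
# samples to the variants they carry. Dict lookups give O(1) node access
# regardless of how many nodes the graph holds.

class VariantGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict
        self.edges = {}   # node_id -> set of neighbouring node ids

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs
        self.edges.setdefault(node_id, set())

    def add_edge(self, a, b):
        # Undirected edge between two existing nodes.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def get(self, node_id):
        # Hash lookup: constant time for any graph size.
        return self.nodes[node_id]

g = VariantGraph()
g.add_node("chr1:12345:A>G", kind="variant", ref="A", alt="G")
g.add_node("sample_001", kind="sample")
g.add_edge("chr1:12345:A>G", "sample_001")
```

Loading the VCF once into such a structure means subsequent queries are pure lookups, which is the access pattern the abstract's 2 ms claim refers to.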
  • Calibration approaches for higher order ambisonic microphones

    Middlicott, Charlie; Wiggins, Bruce; University of Derby; Sky Labs (Audio Engineering Society, 2019-10-08)
    Recent years have seen an increase in the capture and production of ambisonic material due to companies such as YouTube and Facebook utilizing ambisonics for spatial audio playback. Consequently, there is now a greater need for affordable high-order microphone arrays due to this uptake in technology. This work details the development of a five-channel circular horizontal ambisonic microphone intended as a tool to explore various optimization techniques, focusing on capsule calibration and pre-processing approaches for unmatched capsules.
  • A case study on sound level monitoring and management at large-scale music festivals

    Hill, Adam J.; Kok, Marcel; Mulder, Johannes; Burton, Jon; Kociper, Alex; Berrios, Anthony; University of Derby; Murdoch University; dBcontrol; Gand Concert Sound (Institute of Acoustics, 2019-11)
    Sound level management at live events has been made immeasurably easier over the past decade or so through the use of commercially-available sound level monitoring software. This paper details a study conducted at a large-scale multi-day music festival in Chicago, USA. The focus was twofold: first, to explore how the use of noise monitoring software affects the mix level set by sound engineers, and second, how crowd size, density and distribution affect the mix level. Additionally, sound levels at various points in the audience were monitored to indicate audience sound exposure over the duration of the festival. Results are presented in relation to those from previous studies, with key findings pointing towards recommendations for best practice.
  • The transparency of binaural auralisation using very high order circular harmonics

    Dring, Mark; Wiggins, Bruce; University of Derby (Institute of Acoustics, 2019-11)
    Ambisonics-to-binaural rendering has become the de facto method for processing and reproducing spatial sound scenes, but direct capture and software-generated output are limited to low orders, limiting the accuracy of psychoacoustic cues and therefore the illusion of a ‘real-world’ experience. Applying a practical method through the use of acoustic modelling software, this study examines the potential of using very high horizontal-only Ambisonic orders (up to 31st) for binaural rendering. A novel approach to the scene capturing process is implemented to realise these very high orders for a reverberant space with head-tracking capabilities. A headphone-based subjective test is conducted, evaluating specific attributes of a presented auditory scene to determine when a limit to the perceived auditory differences of varying orders has been reached.
  • Analysis and optimal design of a vibration isolation system combined with electromagnetic energy harvester

    Diala, Uchenna; Mofidian, SM Mahdi; Lang, Zi-Qiang; Bardaweel, Hamzeh; University of Sheffield (SAGE Publications, 2019-07-17)
    This work investigates a vibration isolation energy harvesting system and studies its design to achieve optimal performance. The system uses a combination of elastic and magnetic components to facilitate its dual functionality. A prototype of the vibration isolation energy harvesting device is fabricated and examined experimentally. A mathematical model is developed from first principles and analyzed using the output frequency response function method. Results from the model analysis show excellent agreement with experiment. Since any vibration isolation energy harvesting system is required to perform two functions simultaneously, optimization of the system is carried out to maximize energy conversion efficiency without jeopardizing the system’s vibration isolation performance. To the knowledge of the authors, this work is the first effort to tackle the issue of simultaneous vibration isolation and energy harvesting using an analytical approach. Explicit analytical relationships describing the system’s transmissibility and energy conversion efficiency are developed. Results exhibit a maximum attainable energy conversion efficiency on the order of 1%. Results suggest that for low acceleration levels, lower damping values are favorable and yield higher conversion efficiencies and improved vibration isolation characteristics. At higher accelerations, there is a trade-off: lower damping values worsen vibration isolation but yield higher conversion efficiencies.
  • Remarks on a family of complex polynomials

    Andrica, Dorin; Bagdasar, Ovidiu; University of Derby (University of Belgrade, 2019-10-30)
    Integral formulae for the coefficients of cyclotomic and polygonal polynomials were recently obtained in [2] and [3]. In this paper, we define and study a family of polynomials depending on an integer sequence m1, ..., mn, ... and on a sequence of complex numbers z1, ..., zn, ... of modulus one. We investigate some particular instances such as extended cyclotomic, extended polygonal-type, and multinomial polynomials, for which we obtain formulae for the coefficients. Some novel related integer sequences are also derived.
  • Charged-particle pseudorapidity density at mid-rapidity in p–Pb collisions at √sNN = 8.16 TeV

    Acharya, S.; Acosta, F.-T.; Adamová, D.; Adhya, S. P.; Adler, A.; Adolfsson, J.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Ahammed, Z.; et al. (Springer Science, 2019-04-04)
    The pseudorapidity density of charged particles, dNch/dη, in p–Pb collisions has been measured at a centre-of-mass energy per nucleon–nucleon pair of √sNN = 8.16 TeV at mid-pseudorapidity for non-single-diffractive events. The results cover 3.6 units of pseudorapidity, |η| < 1.8. The dNch/dη value is 19.1 ± 0.7 at |η| < 0.5. This quantity divided by Npart/2 is 4.73 ± 0.20, where Npart is the average number of participating nucleons; this is 9.5% higher than the corresponding value for p–Pb collisions at √sNN = 5.02 TeV. Measurements are compared with models based on different mechanisms for particle production. All models agree with the data within uncertainties on the Pb-going side, while HIJING overestimates, showing a symmetric behaviour, and EPOS underestimates the p-going side of the dNch/dη distribution. Saturation-based models reproduce the distributions well for η > −1.3. The dNch/dη is also measured for different centrality estimators, based both on the charged-particle multiplicity and on the energy deposited in the Zero-Degree Calorimeters. A study of the implications of the large multiplicity fluctuations, due to the small number of participants in systems like p–Pb, for the centrality calculation with multiplicity-based estimators is discussed, demonstrating the advantages of determining the centrality from the energy deposited near beam rapidity.
  • A software platform for noise reduction in sound sensor equipped drones

    Kang, Byungseok; University of Derby (IEEE, 2019-07-08)
    A flying drone provides multiple video capturing options for filming. Since noise is generated by the propellers and rotors of a drone, the quality of sound in the recorded video is quite low. Large drones are used singly in missions, while small ones are used in formations or swarms. Small drones are proving to be useful in civilian applications, and these applications are most effective with multiple drones. Consideration of small drones for applications such as group flight, entertainment, and signal emission leads to the deployment of networked drones. To solve the noise problem and support group display applications, a software platform addressing these issues in networked drones is proposed. Noise reduction combines active noise control and spectral subtraction. In addition, the drones form group displays for entertainment and display applications. We developed a small-scale testbed to measure the service quality of the proposed platform. The experimental results show that the proposed noise reduction produces a speech signal with up to 67.5% similarity to the original signal. It outperforms active noise control and spectral subtraction used alone, which achieve similarities of 53.1% and 39.6%, respectively. We also show that a drone formation can form a group display to display messages effectively.
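    Of the two techniques combined above, spectral subtraction is the simpler to illustrate. The following is a hedged sketch only (the paper's actual pipeline, framing and parameters are not given here): an estimated noise magnitude spectrum is subtracted, bin by bin, from the noisy signal's magnitude spectrum, flooring at zero to avoid negative magnitudes.

```python
# Toy spectral subtraction over magnitude spectra (illustrative only; a real
# implementation would operate frame-by-frame on STFT magnitudes and keep the
# noisy phase for resynthesis).

def spectral_subtract(noisy_mag, noise_mag, floor=0.0):
    """Subtract an estimated noise magnitude spectrum, bin by bin."""
    return [max(n - e, floor) for n, e in zip(noisy_mag, noise_mag)]

# Example: propeller noise concentrated in the low-frequency bins.
noisy = [5.0, 4.0, 1.25, 0.875]
noise_estimate = [4.5, 3.5, 0.25, 0.125]
cleaned = spectral_subtract(noisy, noise_estimate)  # -> [0.5, 0.5, 1.0, 0.75]
```

The flooring step is what distinguishes this from a plain subtraction: bins where the noise estimate exceeds the observed magnitude are clamped rather than made negative.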
  • A location aware fast PMIPv6 for low latency wireless sensor networks

    Kang, Byungseok; University of Derby (IEEE, 2019-06-28)
    Recently, mobile sensor networks (MSNs) have been actively studied due to the emergence of mobile sensors such as Robomote and robotic sensor agents (RSAs). Research on existing mobile sensor networks mainly focuses on solving the coverage hole problem, which occurs in stationary sensor networks (SSNs). These studies have the disadvantage that they cannot make the most of the mobility given to the moving sensors. To solve this problem, it has been proposed to sense a wider area than a fixed sensor network by giving the moving sensors continuous mobility. However, this research is still at an early stage, and problems remain with the communication path to the sink node and with data transmission. In this paper, we propose a location-aware fast PMIPv6 (LA-FPMIPv6) protocol that enables efficient routing and data transmission in a mobile sensor network environment composed of mobile sensors with continuous mobility. In the proposed protocol, fixed sensors are arranged alongside the moving sensors so that a fixed sensor transmits the sensing data to the sink node on behalf of the moving sensor. For performance evaluation, LA-FPMIPv6 is compared with existing methods through mathematical analysis and computer simulation. The results show that LA-FPMIPv6 effectively reduces handover latency, signalling cost, and buffering cost compared with conventional methods.
  • A validation of security determinants model for cloud adoption in Saudi organisations’ context

    Alassafi, Madini O.; Atlam, Hany F.; Alshdadi, Abdulrahman A.; Alzahrani, Abdullah I.; AlGhamdi, Rayed A.; Buhari, Seyed M.; University of Southampton (Springer, 2019-08-30)
    Governments across the world are starting to make a dynamic shift to cloud computing so as to increase efficiency. Although cloud technology brings various benefits for government organisations, including flexibility and low cost, adopting it alongside existing systems is not an easy task. In this regard, the most significant challenge for any government agency is security. Our previous study focused on identifying the security factors that influence the decision of government organisations to adopt the cloud. This research enhances that work by investigating the impact of various independent security-related factors on the adopted security taxonomy, based on critical ratio, standard error and significance levels. Data was collected from IT and security experts in the government organisations of Saudi Arabia. The Analysis of Moment Structures (AMOS) tool was used for data analysis. The critical ratio reveals the importance of the Security Benefits, Risks and Awareness taxonomies for cloud adoption. Also, most of the exogenous variables had strong and positive relationships with their fellow exogenous variables. In future, this taxonomy model can also be applied to study the adoption of new IT innovations whose IT architecture is similar to that of the cloud.
  • An efficient security risk estimation technique for Risk-based access control model for IoT

    Atlam, Hany F.; Wills, Gary; University of Southampton (Elsevier, 2019-04-15)
    The need to increase information sharing in Internet of Things (IoT) applications has made the risk-based access control model the best candidate for both academic and commercial organisations. The risk-based access control model carries out a security risk analysis on each access request, using IoT contextual information to provide access decisions dynamically. Unlike current static access control approaches, which are based on predefined policies and give the same result in different situations, this model provides the flexibility required to access system resources and works well in unexpected conditions and situations of the IoT system. One of the main issues in implementing this model is determining an appropriate risk estimation technique that can generate accurate and realistic risk values for each access request to determine the access decision. Therefore, this paper proposes a risk estimation technique which integrates a fuzzy inference system with expert judgment to assess the security risks of access control operations in the IoT system. Twenty IoT security experts from inside and outside the UK were interviewed to validate the proposed risk estimation technique and build the fuzzy inference rules accurately. The proposed risk estimation approach was implemented and simulated using access control scenarios for a network router. In comparison with existing fuzzy techniques, the proposed technique has been demonstrated to produce precise and realistic values when evaluating the security risks of access control operations in the IoT context.
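    The abstract does not reproduce the expert-derived membership functions or rule base, but the general shape of a fuzzy risk estimator can be sketched. The following toy zero-order Sugeno-style system (the linear memberships, the two rules and the output constants 0.2 and 0.9 are all illustrative assumptions, not the paper's) maps two inputs in [0, 1] to a crisp risk value:

```python
# Toy fuzzy risk estimator: 'low' membership is (1 - x), 'high' is x;
# two rules are combined by weighted-average defuzzification.

def risk_score(likelihood, impact):
    # Rule 1: IF likelihood is low AND impact is low THEN risk = 0.2
    w_low = min(1.0 - likelihood, 1.0 - impact)
    # Rule 2: IF likelihood is high AND impact is high THEN risk = 0.9
    w_high = min(likelihood, impact)
    if w_low + w_high == 0.0:
        return 0.5  # no rule fires: fall back to a neutral value
    return (0.2 * w_low + 0.9 * w_high) / (w_low + w_high)

print(risk_score(0.9, 0.8))  # high likelihood and impact -> roughly 0.82
```

A real system of this kind would have many more input variables, expert-shaped membership functions and a rule base per access-control context, but the firing-strength-then-defuzzify structure is the same.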
  • A systems engineering hackathon – a methodology involving multiple stakeholders to progress conceptual design of a complex engineered product

    Saravi, S.; Joannou, D.; Kalawsky, R. S.; King, M. R. N.; Marr, I.; Hall, M.; Wright, P. C. J.; Ravindranath, R.; Hill, A.; Loughborough University (Institute of Electrical and Electronics Engineers (IEEE), 2018)
    This paper describes a novel hackathon-style systems engineering process and its value as an agile approach to the rapid generation and development of early design concepts for complex engineered products, in this case a future aircraft. Complex product design typically requires a diverse range of stakeholders to arrive at a consensus on key decision criteria and design factors, which requires effective articulation and communication of information across traditional engineering and operational disciplines. The application of the methodology is highlighted by means of a case study inspired by Airbus, where stakeholder involvement and internal collaboration among team members were essential to achieve a set of agreed goals. This paper shows that a hackathon grounded in systems engineering approaches and structured around the technical functions within an engineering company has the capability and capacity to communicate a coherent vision and rationale for the conceptual design of a complex engineered product. The hackathon method offers significant benefits to these stakeholders in better managing, prioritizing, and decreasing excessive complexities in the overall design process. A significant benefit of this agile process is that it can achieve useful results in a very short timeframe (an 80% reduction compared with current internal methods, which could take up to a year).
  • Multiclass disease predictions based on integrated clinical and genomics datasets

    Anjum, Ashiq; Subhani, Moeez; University of Derby (IARIA, 2019-06-02)
    Clinical predictions from clinical data using computational methods are common in bioinformatics. However, clinical predictions that also use information from genomics datasets are not a frequently observed phenomenon in research. Precision medicine research requires information from all available datasets to provide intelligent clinical solutions. In this paper, we have attempted to create a prediction model which uses information from both clinical and genomics datasets. We have demonstrated multiclass disease predictions based on combined clinical and genomics datasets using machine learning methods. We created an integrated dataset, using a clinical (ClinVar) and a genomics (gene expression) dataset, and trained it with an instance-based learner to predict clinical diseases. We have used an innovative but simple approach to multiclass classification, where the number of output classes is as high as 75, with Principal Component Analysis for feature selection. The classifier predicted diseases with 73% accuracy on the integrated dataset. The results were consistent and competitive when compared with other classification models. They show that genomics information can be reliably included in datasets for clinical predictions, and that it can prove valuable in clinical diagnostics and precision medicine.
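    The instance-based step above can be sketched without the PCA stage or the real datasets. As a minimal illustration (the feature values and disease labels below are invented, and the paper's actual learner and distance metric are not specified here), a 1-nearest-neighbour classifier assigns a new integrated record the label of its closest training record:

```python
# Toy instance-based (1-NN) classifier over integrated feature vectors.
import math

def one_nn(train, query):
    """train: list of (feature_vector, label); returns the nearest label."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(train, key=lambda item: dist(item[0], query))[1]

# Invented records mixing clinical and expression-derived features.
train = [([0.1, 0.9], "disease_A"), ([0.8, 0.2], "disease_B")]
print(one_nn(train, [0.2, 0.8]))  # -> disease_A (closest training record)
```

With 75 output classes the scheme is unchanged: instance-based learners scale to many classes simply by storing labelled exemplars, which is part of their appeal here.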
  • Use of artificial intelligence to improve resilience and preparedness against adverse flood events

    Saravi, Sara; Kalawsky, Roy; Joannou, Demetrios; Rivas Casado, Monica; Fu, Guangtao; Meng, Fanlin; Loughborough University (MDPI AG, 2019-05-09)
    The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can be the result of snow or ice melt and can occur in specific geographic places and in certain seasons. The motivation behind this research arose from the Building Resilience into Risk Management (BRIM) project, looking at resilience in water systems. This research applies state-of-the-art techniques, i.e., AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events, learning from the past to extract patterns and information and to understand flood behaviours in order to improve resilience, prevent damage, and save lives. In this paper, various ML models have been developed and evaluated for classifying floods, e.g., flash floods, lakeshore floods, etc., using current information, i.e., weather forecasts, in different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness, and thus improve flood resilience.
  • A model-based engineering methodology and architecture for resilience in systems-of-systems: a case of water supply resilience to flooding

    Joannou, Demetrios; Kalawsky, Roy; Saravi, Sara; Rivas Casado, Monica; Fu, Guangtao; Meng, Fanlin; Loughborough University (MDPI AG, 2019-03-08)
    There is a clear and evident requirement for a conscious effort to be made towards a resilient water system-of-systems (SoS) within the UK, in terms of both supply and flooding. The impact of flooding goes beyond the immediately obvious social aspects of disruption, cascading to affect a wide range of connected systems. The issues caused by flooding need to be treated in a fashion which adopts an SoS approach to evaluate the risks associated with interconnected systems and to assess resilience against flooding from various perspectives. Changes in climate result in deviations in the frequency and intensity of precipitation; variations in annual patterns make planning and management for resilience more challenging. This article presents a verified model-based systems engineering methodology for decision-makers in the water sector to holistically and systematically implement resilience within the water context, specifically focusing on the effects of flooding on water supply. A novel resilience viewpoint has been created which is solely focused on the resilience aspects of the architecture presented within this paper. Systems architecture modelling forms the basis of the methodology and includes this innovative resilience viewpoint to help evaluate current SoS resilience and to design for future resilient states. Architecting for resilience, and subsequently simulating designs, is seen as the solution to ensuring that system performance does not suffer and that systems continue to function at the desired levels of operability. The case study presented within this paper demonstrates the application of the SoS resilience methodology to water supply networks in times of flooding, highlighting how such a methodology can be used for approaching resilience in the water sector from an SoS perspective. The methodology highlights where resilience improvements are necessary and also provides a process whereby architecture solutions can be proposed and tested.
  • Integration and evaluation of QUIC and TCP-BBR in long-haul science data transfers

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Duncan, Rand; Jisc, Lumen House; University of Derby, College of Engineering and Technology; Brunel University London, College of Engineering, Design and Physical Sciences (EDP Sciences, 2019-09-17)
    Two recent and promising additions to the internet protocols are TCP-BBR and QUIC. BBR defines a congestion policy that promises better control of TCP bottlenecks on long-haul transfers and can also be used in the QUIC protocol. TCP-BBR is implemented in Linux kernels from 4.9 onwards. It has been shown, however, to demand careful fine-tuning in its interaction with, for example, the Linux Fair Queue. QUIC, on the other hand, replaces HTTP and TLS with a protocol on top of UDP plus a thin layer to serve HTTP. It has been reported to account today for 7% of Google’s traffic, yet it has not been used in server-to-server transfers, even if its creators see that as a real possibility. Our work evaluates the applicability and tuning of TCP-BBR and QUIC for science data transfers. We describe the deployment and performance evaluation of TCP-BBR and its comparison with CUBIC and H-TCP in transfers through the TEIN link to SingAREN (Singapore). Also described is the deployment and initial evaluation of a QUIC server. We argue that QUIC might be a perfect match in security and connectivity for base services that are today performed by the Xroot redirectors.
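    As a practical note on the deployment described above: on a Linux kernel at or above 4.9, BBR is typically enabled via sysctl, commonly paired with the fq queueing discipline whose interaction the authors mention tuning. A minimal sketch (exact settings for a given transfer node will differ):

```shell
# List the congestion control algorithms the running kernel offers.
sysctl net.ipv4.tcp_available_congestion_control

# Enable the fq qdisc and switch TCP congestion control to BBR (needs root).
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
```

These settings take effect for new connections; persisting them requires an entry in /etc/sysctl.conf or a drop-in under /etc/sysctl.d/.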
  • Privacy verification of photoDNA based on machine learning

    Nadeem, Muhammad Shahroz; Franqueira, Virginia N. L.; Zhai, Xiaojun; University of Derby, College of Engineering and Technology; University of Essex, School of Computer Science and Electronic Engineering (The Institution of Engineering and Technology (IET), 2019-10-09)
    PhotoDNA is a perceptual fuzzy hash technology designed and developed by Microsoft. It is deployed by all major big data service providers to detect Indecent Images of Children (IIOC). Protecting the privacy of the individuals depicted in such images is of paramount importance. Microsoft claims that a PhotoDNA hash cannot be reverse-engineered into the original image; therefore, it is not possible to identify individuals or objects depicted in the image. In this chapter, we evaluate the privacy protection capability of PhotoDNA by testing it against machine learning. Specifically, our aim is to detect the presence of any structural information that might be utilized to compromise the privacy of individuals via classification. Given the widespread usage of PhotoDNA as a deterrent to IIOC by big data companies, ensuring its ability to protect privacy is crucial. In our experimentation, we achieved a classification accuracy of 57.20%. This result indicates that PhotoDNA is resistant to machine-learning-based classification attacks.
  • First observation of an attractive interaction between a proton and a cascade baryon

    Acharya, S.; Adamová, D.; Adhya, S. P.; Adler, A.; Adolfsson, J.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; et al. (American Physical Society (APS), 2019-09-13)
    This Letter presents the first experimental observation of the attractive strong interaction between a proton and a multistrange baryon (hyperon) Ξ−. The result is extracted from two-particle correlations of combined p−Ξ−⊕¯p−¯Ξ+ pairs measured in p−Pb collisions at √sNN=5.02 TeV at the LHC with ALICE. The measured correlation function is compared with the prediction obtained assuming only an attractive Coulomb interaction and a standard deviation in the range [3.6, 5.3] is found. Since the measured p−Ξ−⊕¯p−¯Ξ+ correlation is significantly enhanced with respect to the Coulomb prediction, the presence of an additional, strong, attractive interaction is evident. The data are compatible with recent lattice calculations by the HAL-QCD Collaboration, with a standard deviation in the range [1.8, 3.7]. The lattice potential predicts a shallow repulsive Ξ− interaction within pure neutron matter and this implies stiffer equations of state for neutron-rich matter including hyperons. Implications of the strong interaction for the modeling of neutron stars are discussed.
  • GORTS: genetic algorithm based on one-by-one revision of two sides for dynamic travelling salesman problems

    Xu, Xiaolong; Yuan, Hao; Matthew, Peter; Ray, Jeffrey; Bagdasar, Ovidiu; Trovati, Marcello; University of Derby; Nanjing University of Posts and Telecommunications, Nanjing, China; Edge Hill University, Ormskirk, UK (Springer, 2019-09-21)
    The dynamic travelling salesman problem (DTSP) is a natural extension of the standard travelling salesman problem, and it has attracted significant interest in recent years due to its practical applications. In this article, we propose an efficient solution for the DTSP based on a genetic algorithm (GA) and on the one-by-one revision of two sides (GORTS). More specifically, GORTS combines the global search ability of the GA with the fast convergence of the one-by-one revision of two sides, in order to find the optimal solution in a short time. An experimental platform was designed to evaluate the performance of GORTS with TSPLIB. The experimental results show that the efficiency of GORTS compares favourably against other popular heuristic algorithms for the DTSP. In particular, a prototype logistics system based on GORTS was designed and implemented for a supermarket with an online map. It was shown that this can provide optimised goods distribution routes for delivery staff while taking real-time traffic information into account.
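    GORTS itself, including the one-by-one revision of two sides, is not reproduced here; as a hedged sketch of only the GA half of such a hybrid, the following minimal tour-length GA (elitist selection, a simplified order crossover, swap mutation; every parameter value is illustrative) solves a static TSP instance:

```python
# Minimal elitist GA for the static TSP (illustrative sketch, not GORTS).
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def evolve(dist, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        nxt = pop[: pop_size // 2]            # keep the fitter half (elitism)
        while len(nxt) < pop_size:
            a, b = rng.sample(nxt, 2)
            i, j = sorted(rng.sample(range(n), 2))
            seg = a[i:j]                      # simplified order crossover
            child = seg + [c for c in b if c not in seg]
            p, q = rng.sample(range(n), 2)    # swap mutation
            child[p], child[q] = child[q], child[p]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(t, dist))

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = evolve(dist)
```

In a dynamic setting, the distance matrix changes over time and the population is re-evolved from its current state rather than restarted, which is where a fast local-convergence step such as the one-by-one revision becomes valuable.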
