• D-Meson Azimuthal Anisotropy in Midcentral Pb-Pb Collisions at √sNN = 5.02 TeV.

      ALICE Collaboration; Barnby, Lee; STFC Daresbury Laboratory (APS Physics, 2018-03-09)
      The azimuthal anisotropy coefficient v2 of prompt D0, D+, D*+, and Ds+ mesons was measured in midcentral (30%–50% centrality class) Pb-Pb collisions at a center-of-mass energy per nucleon pair √sNN = 5.02 TeV, with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays at midrapidity, |y| < 0.8, in the transverse momentum interval 1 < pT < 24 GeV/c. The measured D-meson v2 has values similar to that of charged pions. The Ds+ v2, measured for the first time, is found to be compatible with that of the nonstrange D mesons. The measurements are compared with theoretical calculations of charm-quark transport in a hydrodynamically expanding medium and have the potential to constrain medium parameters.
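      The v2 quoted above is the second Fourier harmonic of the azimuthal distribution, dN/dφ ∝ 1 + 2·v2·cos(2(φ − Ψ2)). A minimal sketch of the event-plane estimator follows; it is illustrative only, since the published analysis additionally corrects for event-plane resolution and subtracts combinatorial background.

```python
import numpy as np

def v2_event_plane(phis, psi2):
    """Estimate v2 as the mean of cos(2(phi - Psi2)) over candidates.

    phis : azimuthal angles of candidates (radians)
    psi2 : second-harmonic event-plane angle (radians)
    """
    return np.mean(np.cos(2.0 * (np.asarray(phis) - psi2)))

# Toy check: sample particles from dN/dphi ∝ 1 + 2*0.1*cos(2*phi)
# by rejection sampling, then recover v2 ≈ 0.1.
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200_000)
keep = rng.uniform(0, 1.2, phi.size) < 1 + 0.2 * np.cos(2 * phi)
print(v2_event_plane(phi[keep], psi2=0.0))  # ≈ 0.1
```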
    • D-meson production in p-Pb collisions at √sNN = 5.02 TeV and in pp collisions at √s = 7 TeV

      ALICE Collaboration; Barnby, Lee; European Organization for Nuclear Research (CERN); University of Birmingham (American Physical Society, 2016-11-23)
      Background: In the context of the investigation of the quark–gluon plasma produced in heavy-ion collisions, hadrons containing heavy (charm or beauty) quarks play a special role for the characterization of the hot and dense medium created in the interaction. The measurement of the production of charm and beauty hadrons in proton–proton collisions, besides providing the necessary reference for the studies in heavy-ion reactions, constitutes an important test of perturbative quantum chromodynamics (pQCD) calculations. Heavy-flavor production in proton–nucleus collisions is sensitive to the various effects related to the presence of nuclei in the colliding system, commonly denoted cold-nuclear-matter effects. Most of these effects are expected to modify open-charm production at low transverse momenta (pT) and, so far, no measurement of D-meson production down to zero transverse momentum was available at mid-rapidity at the energies attained at the CERN Large Hadron Collider (LHC). Purpose: The measurements of the production cross sections of promptly produced charmed mesons in p-Pb collisions at the LHC down to pT = 0 and the comparison to the results from pp interactions are aimed at the assessment of cold-nuclear-matter effects on open-charm production, which is crucial for the interpretation of the results from Pb-Pb collisions. Methods: The prompt charmed mesons D0, D+, D∗+, and Ds+ were measured at mid-rapidity in p-Pb collisions at a center-of-mass energy per nucleon pair √sNN = 5.02 TeV with the ALICE detector at the LHC. D mesons were reconstructed from their decays D0 → K−π+, D+ → K−π+π+, D∗+ → D0π+, Ds+ → φπ+ → K−K+π+, and their charge conjugates, using an analysis method based on the selection of decay topologies displaced from the interaction vertex. In addition, the prompt D0 production cross section was measured in pp collisions at √s = 7 TeV and p-Pb collisions at √sNN = 5.02 TeV down to pT = 0 using an analysis technique that is based on the estimation and subtraction of the combinatorial background, without reconstruction of the D0 decay vertex. Results: The production cross section in pp collisions is described within uncertainties by different implementations of pQCD calculations down to pT = 0. This also allowed a determination of the total cc̄ production cross section in pp collisions, which is more precise than previous ALICE measurements because it is not affected by uncertainties owing to the extrapolation to pT = 0. The nuclear modification factor RpPb(pT), defined as the ratio of the pT-differential D-meson cross section in p-Pb collisions to that in pp collisions scaled by the mass number of the Pb nucleus, was calculated for the four D-meson species and found to be compatible with unity within uncertainties. The results are compared to theoretical calculations that include cold-nuclear-matter effects and to transport model calculations incorporating the interactions of charm quarks with an expanding deconfined medium. Conclusions: These measurements add experimental evidence that the modification of the D-meson transverse momentum distributions observed in Pb–Pb collisions with respect to pp interactions is due to strong final-state effects induced by the interactions of the charm quarks with the hot and dense partonic medium created in ultrarelativistic heavy-ion collisions.
The current precision of the measurement does not allow us to draw conclusions on the role of the different cold-nuclear-matter effects or on the possible presence of additional hot-medium effects in p-Pb collisions. However, the analysis technique without decay-vertex reconstruction, applied to future larger data samples, should provide access to the physics-rich range down to pT = 0.
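      The nuclear modification factor defined in the abstract is simple to compute once both pT-differential cross sections are in hand; a minimal sketch with hypothetical numbers (no uncertainty propagation):

```python
import numpy as np

A_PB = 208  # mass number of the Pb nucleus

def r_ppb(dsigma_ppb, dsigma_pp, a=A_PB):
    """R_pPb(pT) = (dσ/dpT)_pPb / (A · (dσ/dpT)_pp), element-wise per pT bin."""
    return np.asarray(dsigma_ppb) / (a * np.asarray(dsigma_pp))

# hypothetical pT-binned cross sections (arbitrary units)
print(r_ppb([410.0, 200.0], [2.0, 0.95]))  # values near 1 ⇒ no modification
```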
    • Data aggregation in wireless sensor networks for lunar exploration

      Zhai, Xiaojun; Vladimirova, Tanya; University of Derby; University of Leicester (IEEE, 2015-09-03)
      This paper presents research work related to the development of Wireless Sensor Networks (WSNs) gathering environmental data from the surface of the Moon. Data aggregation algorithms are applied to reduce the amount of multi-sensor data collected by the WSN, which are to be sent to a Moon orbiter and later to Earth. A particular issue of utmost importance to space applications is energy efficiency, and a main goal of the research is to optimise the algorithm design so that the WSN energy consumption is reduced. An extensive simulation experiment is carried out, which confirms that the use of the proposed algorithms significantly enhances network performance in terms of energy consumption compared to routing the raw data. In addition, the proposed data aggregation algorithms are implemented successfully on a System-on-a-Chip (SoC) embedded platform using a Xilinx Zynq FPGA device. The data aggregation has two important effects: the WSN lifetime is extended due to the energy saved, and the original data accuracy is preserved. This research could be beneficial for a number of future security-related applications, such as monitoring of phenomena that may affect Earth's planetary security/safety, as well as monitoring the implementation of Moon treaties preventing the establishment of military bases on the lunar surface.
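      The paper's specific aggregation algorithms are not reproduced here; as a generic sketch of why in-network aggregation saves energy, a cluster head can uplink one summary packet in place of many raw readings:

```python
from statistics import mean

def aggregate(readings):
    """Collapse raw sensor readings into one summary tuple
    (min, mean, max), so a cluster head uplinks a single
    packet instead of len(readings) raw packets."""
    return min(readings), mean(readings), max(readings)

temps = [-153.2, -151.8, -152.5, -150.9]  # hypothetical lunar-night readings, °C
print(aggregate(temps))  # one packet to the orbiter instead of four
```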
    • Data aggregation with end-to-end confidentiality and integrity for large-scale wireless sensor networks.

      Cui, Jie; Shao, Lili; Zhong, Hong; Xu, Yan; Liu, Lu; Anhui University; University of Derby (Springer, 2017-07-17)
      In wireless sensor networks, data aggregation allows in-network processing, which leads to reduced packet transmissions and reduced redundancy, and is thus helpful to prolong the overall lifetime of wireless sensor networks. In current studies, the Elliptic Curve ElGamal homomorphic encryption algorithm has been widely used to protect end-to-end data confidentiality. However, these works suffer from the expensive mapping function required during decryption: if the aggregated results are large, the base station cannot recover the original data owing to the hardness of the elliptic curve discrete logarithm problem. Therefore, these schemes are unsuitable for large-scale WSNs. In this paper, we propose a secure energy-saving data aggregation scheme designed for large-scale WSNs. We employ the Okamoto–Uchiyama homomorphic encryption algorithm to protect end-to-end data confidentiality, use MACs to achieve in-network false-data filtering, and utilize the homomorphic MAC algorithm to achieve end-to-end data integrity. Two popular IEEE 802.15.4-compliant wireless sensor network platforms, Tmote Sky and iMote 2, have been used to evaluate the efficiency and feasibility of our scheme. The results demonstrate that our scheme achieves better performance in reducing energy consumption. Moreover, system delay, especially decryption delay at the base station, is reduced when compared to other state-of-the-art methods.
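      The property that makes Okamoto–Uchiyama suitable for this setting is its additive homomorphism: multiplying ciphertexts adds the underlying plaintexts, so intermediate nodes can aggregate encrypted readings without decrypting them. A toy sketch with tiny demo primes (a real deployment needs ~1024-bit primes and a vetted cryptographic library):

```python
import random

# Toy Okamoto-Uchiyama: n = p^2 * q; Enc(m) = g^m * h^r mod n.
# Multiplying ciphertexts adds plaintexts, so an aggregator can
# sum readings without decrypting them.
p, q = 101, 113                    # demo primes only; use ~1024-bit in practice
n = p * p * q
g = 3
assert pow(g, p - 1, p * p) != 1   # g must have order divisible by p mod p^2
h = pow(g, n, n)

def enc(m, r=None):
    r = r or random.randrange(1, n)
    return (pow(g, m, n) * pow(h, r, n)) % n

def L(x):                          # L(x) = (x - 1) / p on the order-p subgroup
    return (x - 1) // p

def dec(c):
    a = L(pow(c, p - 1, p * p))
    b = L(pow(g, p - 1, p * p))
    return (a * pow(b, -1, p)) % p  # valid for plaintexts m < p

c = (enc(17) * enc(25)) % n        # aggregate two encrypted readings in-network
print(dec(c))                      # -> 42, the sum, recovered at the base station
```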
    • Data classification using the Dempster–Shafer method.

      Chen, Qi; Whitbrook, Amanda; Aickelin, Uwe; Roadknight, Chris; University of Nottingham (Taylor and Francis, 2014-02-26)
      In this paper, the Dempster–Shafer (D–S) method is used as the theoretical basis for creating data classification systems. Testing is carried out using three popular multiple attribute benchmark data-sets that have two, three and four classes. In each case, a subset of the available data is used for training to establish thresholds, limits or likelihoods of class membership for each attribute, and hence create mass functions that establish probability of class membership for each attribute of the test data. Classification of each data item is achieved by combination of these probabilities via Dempster’s rule of combination. Results for the first two data-sets show extremely high classification accuracy that is competitive with other popular methods. The third data-set is non-numerical and difficult to classify, but good results can be achieved provided the system and mass functions are designed carefully and the right attributes are chosen for combination. In all cases, the D–S method provides comparable performance to other more popular algorithms, but the overhead of generating accurate mass functions increases the complexity with the addition of new attributes. Overall, the results suggest that the D–S approach provides a suitable framework for the design of classification systems and that automating the mass function design and calculation would increase the viability of the algorithm for complex classification problems.
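      Dempster's rule of combination, the combination step described above, merges two mass functions over subsets of the frame of discernment and renormalizes away the conflicting mass. A minimal sketch (class and attribute names hypothetical):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements:
    m12(A) = sum over B∩C=A of m1(B)*m2(C) / (1 - K),
    where K is the total mass assigned to empty intersections."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

theta = frozenset({"class1", "class2"})              # frame of discernment
m_attr1 = {frozenset({"class1"}): 0.7, theta: 0.3}   # mass from attribute 1
m_attr2 = {frozenset({"class1"}): 0.6, frozenset({"class2"}): 0.2, theta: 0.2}
print(dempster_combine(m_attr1, m_attr2))            # class1 mass ≈ 0.86
```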
    • Data Driven Transmission Power Control for Wireless Sensor Networks

      Kotian, Roshan; Exarchakos, Georgios; Liotta, Antonio (Springer, 2015)
    • Data Intensive and Network Aware (DIANA) grid scheduling

      McClatchey, Richard; Anjum, Ashiq; Stockinger, Heinz; Ali, Arshad; Willers, Ian; Thomas, Michael; University of West England; Swiss Institute of Bioinformatics; National University of Sciences and Technology; CERN; et al. (Springer, 2007-01-27)
      In Grids, scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data-intensive situations jobs may be pushed to the data, and in computation-intensive situations data may be pulled to the jobs. This kind of scheduling, in which no consideration is given to network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location, calculated as a function of network characteristics, processing cycles, and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first-class criterion in the scheduling decision matrix, along with computation and data. The scheduler can then make informed decisions by taking into account the changing state of the network, the locality and size of the data, and the pool of available processing cycles.
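      The per-site weighting can be illustrated as a total cost combining data transfer time, network latency and compute time, with sites ranked by that cost. The sketch below uses an illustrative cost model and hypothetical site parameters, not the paper's exact formula:

```python
def site_cost(data_size_gb, bandwidth_gbps, latency_s, job_gflops, free_gflops):
    """Illustrative DIANA-style cost: data transfer time + latency + compute time."""
    transfer = data_size_gb * 8 / bandwidth_gbps  # seconds to move the data
    compute = job_gflops / free_gflops            # seconds of processing
    return transfer + latency_s + compute

sites = {  # hypothetical candidate execution sites
    "site-a": dict(bandwidth_gbps=10.0, latency_s=0.005, free_gflops=50.0),
    "site-b": dict(bandwidth_gbps=0.5,  latency_s=0.150, free_gflops=200.0),
}
job = dict(data_size_gb=20.0, job_gflops=1000.0)
ranking = sorted(sites, key=lambda s: site_cost(**job, **sites[s]))
print(ranking)  # lowest-cost site first
```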
    • Data Mining for Monitoring and Managing Systems and Networks

      Liotta, Antonio; Di Fatta, Giuseppe (Springer US, 2014)
    • Data-driven knowledge acquisition, validation, and transformation into HL7 Arden Syntax

      Hussain, Maqbool; Afzal, Muhammad; Ali, Taqdir; Ali, Rahman; Khan, Wajahat Ali; Jamshed, Arif; Lee, Sungyoung; Kang, Byeong Ho; Latif, Khalid; Kyung Hee University, Seocheon-dong, Giheung-gu, Yongin-si 446-701, Gyeonggi-do, Republic of Korea; et al. (Elsevier BV, 2015-10-28)
      The objective of this study is to help a team of physicians and knowledge engineers acquire clinical knowledge from datasets of existing practices for the treatment of head and neck cancer, to validate the knowledge against published guidelines, to create refined rules, and to incorporate these rules into the clinical workflow for clinical decision support. A team of physicians (clinical domain experts) and knowledge engineers adapt an approach for modeling existing treatment practices into final executable clinical models. For the initial work, the oral cavity is selected as the candidate target area for the creation of rules covering a treatment plan for cancer. The final executable model is presented in HL7 Arden Syntax, which allows the clinical knowledge to be shared among organizations. We use a data-driven knowledge acquisition approach based on the analysis of real patient datasets to generate a predictive model (PM). The PM is converted into a refined clinical knowledge model (R-CKM), which follows a rigorous validation process. The validation process uses a clinical knowledge model (CKM), which provides the basis for defining the underlying validation criteria. The R-CKM is converted into a set of medical logic modules (MLMs) and is evaluated using real patient data from a hospital information system. We selected the oral cavity as the intended site for the derivation of all related clinical rules for possible associated treatment plans. A team of physicians analyzed the National Comprehensive Cancer Network (NCCN) guidelines for the oral cavity and created a common CKM. Among the decision tree algorithms, chi-squared automatic interaction detection (CHAID) was applied to a refined dataset of 1229 patients to generate the PM. The PM was tested on a disjoint dataset of 739 patients, giving 59.0% accuracy. Using a rigorous validation process, the R-CKM was created from the PM as the final model, after conforming to the CKM. The R-CKM was converted into four candidate MLMs and was used to evaluate real data from 739 patients, yielding efficient performance with 53.0% accuracy. Data-driven knowledge acquisition and validation against published guidelines were used to help a team of physicians and knowledge engineers create executable clinical knowledge. The advantages of the R-CKM are twofold: it reflects real practices and conforms to standard guidelines, while providing optimal accuracy comparable to that of a PM. The proposed approach yields better insight into the steps of knowledge acquisition and enhances the collaboration efforts of the team of physicians and knowledge engineers.
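      The split-selection step at the heart of CHAID picks the predictor whose cross-tabulation with the outcome is most significant under a chi-squared test. A minimal sketch of that step on a hypothetical toy dataset (column names invented, not the paper's):

```python
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df, predictors, target):
    """Return the predictor with the smallest chi-squared p-value
    against the target: the attribute CHAID would split on first."""
    p_values = {}
    for col in predictors:
        table = pd.crosstab(df[col], df[target])
        _, p, _, _ = chi2_contingency(table)
        p_values[col] = p
    return min(p_values, key=p_values.get), p_values

# hypothetical toy records in the spirit of the paper's patient datasets
df = pd.DataFrame({
    "tumor_stage": ["T1", "T1", "T2", "T2", "T3", "T3", "T1", "T3"],
    "node_status": ["N0", "N0", "N0", "N1", "N1", "N1", "N1", "N0"],
    "treatment":   ["surgery", "surgery", "surgery", "chemo",
                    "chemo", "chemo", "surgery", "chemo"],
})
split, pvals = best_chaid_split(df, ["tumor_stage", "node_status"], "treatment")
print(split, pvals)
```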
    • Deadline constrained video analysis via in-transit computational environments

      Zamani, Ali Reza; Zou, Mengsong; Diaz-Montes, Javier; Petri, Ioan; Rana, Omer; Anjum, Ashiq; Parashar, Manish; University of Derby (IEEE, 2017-01-16)
      Combining edge processing (at the data capture site) with analysis carried out while data is en route from the capture site to a data center offers a variety of different processing models. Such in-transit nodes include network data centers that have generally been used to support content distribution (providing support for data multicast and caching), but have recently started to offer user-defined programmability through Software Defined Networking (SDN) capability, e.g. OpenFlow, and Network Function Virtualization (NFV). We demonstrate how this multi-site computational capability can be aggregated to support video analytics, with quality-of-service and cost constraints (e.g. latency-bound analysis). The use of SDN technology enables separation of the data path from the control path, enabling in-network processing capabilities to be supported as data is migrated across the network. We propose to leverage SDN capability to gain control over the data transport service with the purpose of dynamically establishing data routes such that we can opportunistically exploit the latent computational capabilities located along the network path. Using a number of scenarios, we demonstrate the benefits and limitations of this approach for video analysis, comparing it with the baseline scenario of undertaking all such analysis at a data center located at the core of the infrastructure.
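      The underlying scheduling decision, namely choosing a route whose in-transit nodes can absorb as much of the analysis as possible while still meeting the latency bound, can be sketched as a filter-then-maximize over candidate paths (structures and numbers hypothetical):

```python
def pick_path(paths, deadline_s):
    """Among candidate routes that meet the deadline, prefer the one
    offering the most in-transit compute capacity along the path."""
    feasible = [p for p in paths
                if p["transit_s"] + p["processing_s"] <= deadline_s]
    if not feasible:
        return None  # fall back to processing everything at the core data center
    return max(feasible, key=lambda p: p["in_transit_gflops"])

candidates = [
    {"name": "direct",      "transit_s": 2.0, "processing_s": 0.0, "in_transit_gflops": 0},
    {"name": "via-ndc-1",   "transit_s": 3.5, "processing_s": 1.0, "in_transit_gflops": 40},
    {"name": "via-ndc-1+2", "transit_s": 5.0, "processing_s": 2.5, "in_transit_gflops": 90},
]
print(pick_path(candidates, deadline_s=5.0)["name"])  # -> "via-ndc-1"
```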
    • Decentralized dynamic understanding of hidden relations in complex networks

      Mocanu, Decebal Constantin; Exarchakos, Georgios; Liotta, Antonio (Nature Publishing Group, 2017-12-21)
    • Deep learning for objective quality assessment of 3D images

      Mocanu, Decebal Constantin; Exarchakos, Georgios; Liotta, Antonio (IEEE, 2014)
    • Deep learning for quality assessment in live video streaming

      Vega, Maria Torres; Mocanu, Decebal Constantin; Famaey, Jeroen; Stavrou, Stavros; Liotta, Antonio (IEEE, 2017)
    • Deep learning hyper-parameter optimization for video analytics in clouds.

      Yaseen, Muhammad Usman; Anjum, Ashiq; Rana, Omer; Antonopoulos, Nikolaos; University of Derby; Cardiff University (IEEE, 2018-06-15)
      A system to perform video analytics is proposed using a dynamically tuned convolutional network. Videos are fetched from cloud storage, preprocessed, and a model for supporting classification is developed on these video streams using cloud-based infrastructure. A key focus in this paper is on tuning the hyper-parameters associated with the deep learning algorithm used to construct the model. We further propose an automatic video object classification pipeline to validate the system. The mathematical model used to support hyper-parameter tuning improves the performance of the proposed pipeline, and the effects of various parameters on the system's performance are compared. Subsequently, the parameters that contribute toward the most optimal performance are selected for the video object classification pipeline. Our experiment-based validation reveals an accuracy and precision of 97% and 96%, respectively. The system proved to be scalable, robust, and customizable for a variety of different applications.
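      The paper's mathematical tuning model is not reproduced here; as a generic illustration of the kind of hyper-parameter search such a pipeline performs, a random search over a CNN-style configuration space (ranges hypothetical):

```python
import random

BATCH_SIZES = [16, 32, 64, 128]   # hypothetical search space, not the paper's
NUM_FILTERS = [32, 64, 96]

def sample_config(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform in [1e-4, 1e-1]
        "batch_size":    rng.choice(BATCH_SIZES),
        "num_filters":   rng.choice(NUM_FILTERS),
        "dropout":       rng.uniform(0.1, 0.5),
    }

def tune(train_and_score, trials=20, seed=0):
    """Keep the configuration with the best validation score;
    train_and_score is the (expensive) function that trains the
    CNN with a given configuration and returns its accuracy."""
    rng = random.Random(seed)
    return max((sample_config(rng) for _ in range(trials)),
               key=train_and_score)

# stand-in scorer so the sketch runs end to end
print(tune(lambda cfg: -abs(cfg["learning_rate"] - 0.01) - cfg["dropout"] * 0.01))
```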
    • A deep-learning-based emergency alert system

      Kang, Byungseok; Choo, Hyunseung; Sungkyunkwan University (Elsevier, 2016-05-21)
      Emergency alert systems serve as a critical link in the chain of crisis communication, and they are essential to minimize loss during emergencies. Acts of terrorism and violence, chemical spills, amber alerts, nuclear facility problems, weather-related emergencies, flu pandemics, and other emergencies all require those responsible, such as government officials, building managers, and university administrators, to be able to quickly and reliably distribute emergency information to the public. This paper presents our design of a deep-learning-based emergency warning system. The proposed system is considered suitable for application in existing infrastructure such as closed-circuit television and other monitoring devices. The experimental results show that in most cases, our system immediately detects emergencies such as car accidents and natural disasters.
    • DeepEddy: A simple deep architecture for mesoscale oceanic eddy detection in SAR images

      Huang, Dongmei; Du, Yanling; He, Qi; Song, Wei; Liotta, Antonio (IEEE, 2017)
    • Defining true propagation patterns of underwater noise produced by stationary vessels

      Lusted-Kosolwski, Claire; Piercy, Julius J. B.; Hill, Adam J.; University of Derby; Department for Environment, Food and Rural Affairs (Acoustical Society of America, 2016-07-10)
      The study of underwater vessel noise over the past sixty years has predominantly focused upon the increase in ambient noise caused by the propulsion mechanisms of large commercial vessels. Studies have identified that the continuous rise of ambient noise levels in open waters is linked to the increase in size and strength of anthropogenic sound sources. Few studies have investigated the noise contribution of smaller vessels or the ambient noise levels present in coastal and in-shore waters. This study aimed to identify the level of noise common to non-commercial harbors by studying the noise emissions of a diesel generator on board a 70 m long sailing vessel. Propagation patterns revealed an unconventional shape (specific to the precise location of the noise source on board the vessel), unlike those of standard geometric spreading models, as typically assumed when predicting vessel noise emission. Harbor attributes (including water depth, ground sediment and structural material components) resulted in altered level and frequency characteristics of the recorded underwater noise, and were correlated with the sound measurements made. The measurements (taken in eight harbors around Northern Europe) were statistically analyzed to identify the primary factors influencing near-field sound propagation around a stationary vessel.
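      For reference, the standard geometric spreading models that the measured patterns deviated from are the spherical (20·log10 r) and cylindrical (10·log10 r) transmission-loss laws:

```python
import math

def transmission_loss_db(r_m, mode="spherical"):
    """Idealized geometric spreading loss at range r (metres, r >= 1):
    spherical 20*log10(r) for deep open water, cylindrical 10*log10(r)
    for shallow, ducted propagation. Real harbor propagation, as the
    study above shows, deviates from both."""
    n = 20.0 if mode == "spherical" else 10.0
    return n * math.log10(r_m)

print(transmission_loss_db(100))                 # 40 dB, spherical
print(transmission_loss_db(100, "cylindrical"))  # 20 dB
```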
    • Deployment of assisted living technology solution platform using smart body sensors for elderly people health monitoring.

      Elsaadi, Riyad; Mahmoud, Shafik; University of Derby (UNSYS digital, 2017-06-01)
      Many of the Ambient Assisted Living Technologies (AALT) available in the market to end-users with long-term health conditions have no common inter-operational protocol. Each product has its own communication protocols, interfaces and interoperation, which limits the reliability, flexibility and efficiency of the resulting solutions. This paper presents an assisted living platform solution for elderly people with long-term health conditions based on wireless sensor networking technology. The system includes multiple feedback sensor arrangements for monitoring, such as: blood pressure, heart rate and body temperature. Each sensor has been integrated with the necessary near-real-time embedded and wireless protocols that allow data collection, transfer and interoperation on an ad-hoc basis. The data will be communicated wirelessly to a central database system and shared through a cloud network. The collected data will be processed and relevant intelligent algorithms will be deployed to ensure that appropriate actions take place when health condition warnings arise. These warnings are to be communicated to the relevant carer, General Practitioner (GP) and health authority to take the necessary actions and steps to handle such end-user health condition warnings. The proposed solution system will provide the flexibility to analyse most health conditions based on near-real-time monitoring technology. It will enable the elderly population with long-term health conditions to manage their daily life activities within multiple environments, i.e. from the comfort of their homes, care centres and hospitals. The data and information will be treated with high confidentiality to ensure end-users' integrity and dignity are maintained.
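      At their simplest, the alerting rules such a platform deploys reduce to threshold checks on the monitored vitals, with warnings escalated to the carer or GP; a hedged sketch follows (thresholds illustrative, not clinical guidance):

```python
# Illustrative thresholds only: not clinical guidance.
VITAL_LIMITS = {
    "systolic_bp_mmhg": (90, 160),
    "heart_rate_bpm":   (50, 110),
    "body_temp_c":      (35.5, 38.0),
}

def check_vitals(reading):
    """Return the warnings to forward to the carer/GP for one
    timestamped reading from the body-sensor network."""
    warnings = []
    for vital, (low, high) in VITAL_LIMITS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            warnings.append(f"{vital}={value} outside [{low}, {high}]")
    return warnings

print(check_vitals({"systolic_bp_mmhg": 172,
                    "heart_rate_bpm": 88,
                    "body_temp_c": 36.6}))
```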
    • Deployment of assisted living technology using intelligent body sensors platform for elderly people health monitoring

      Elsaadi, Riyad; Shafik, Mahmoud; University of Derby (IOS Press, 2016-09)
      Many of the Ambient Assisted Living Technologies (AALT) available to end-users as off-the-shelf products have no common inter-operational protocol (language). Each product has its own communication protocols, interfaces and interoperation, which limits their efficiency as long-term health condition monitoring systems. This paper presents an assisted living technology (ALT) solution for elderly people based on wireless sensor networking technology. The system includes biofeedback monitoring body sensors, such as: blood pressure, heart rate and body temperature. Each sensor has been integrated with the necessary real-time embedded protocol and system to work on an ad-hoc basis. The data will be sent wirelessly and shared through a cloud network. The collected data will be processed and relevant algorithms will be deployed to take certain actions when any changes occur or health warnings arise. These data will be treated with high confidentiality to ensure end-users' integrity and dignity are maintained. The proposed solution system will provide the flexibility to analyse most health conditions based on near-real-time monitoring technology. It will also enable the elderly population to manage their daily life activities within multiple environments, i.e. from the comfort of their homes, care centers and hospitals.
    • Deriving a global and hourly data set of aerosol optical depth over land using data from four geostationary satellites: GOES-16, MSG-1, MSG-4, and Himawari-8

      Xie, Yanqing; Xue, Yong; Guang, Jie; Mei, Linlu; She, Lu; Li, Ying; Che, Yahui; Fan, Cheng; China University of Mining and Technology, Xuzhou, China; State Key Laboratory of Remote Sensing Science; et al. (IEEE, 2019-11-07)
      Due to the limitations in the number of satellites and in their swath width (determined by the field of view and height of the satellites), it is impossible to monitor the global aerosol distribution at high frequency using polar-orbiting satellites. This limits the applicability of aerosol optical depth (AOD) data sets in many fields, such as atmospheric pollutant monitoring and climate change research, where a high temporal data resolution may be required. Although geostationary satellites have a high temporal resolution and an extensive observation range, three or more satellites are required to achieve global monitoring of aerosols. In this article, we obtain an hourly, global AOD data set by integrating AOD data sets from four geostationary weather satellites [Geostationary Operational Environmental Satellite (GOES-16), Meteosat Second Generation (MSG-1), MSG-4, and Himawari-8]. The integrated data set will expand the application range beyond the four individual AOD data sets. The integrated geostationary satellite AOD data sets from April to August 2018 were validated using Aerosol Robotic Network (AERONET) data. The data set was evaluated in terms of the mean absolute error, mean bias error, relative mean bias, and root-mean-square error; the values obtained were 0.07, 0.01, 1.08, and 0.11, respectively. The fraction of satellite retrievals with errors within ±(0.05 + 0.2 × AOD_AERONET) is 0.69. The spatial coverage and accuracy of the MODIS/C61/AOD product released by NASA were also analyzed as a representative of polar-orbit satellites. The analysis results show that the integrated AOD data set has similar accuracy to that of the MODIS/AOD data set, and higher temporal resolution and spatial coverage than the MODIS/AOD data set.
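      The reported validation statistics, and the fraction of retrievals inside the ±(0.05 + 0.2·AOD_AERONET) envelope, are standard and straightforward to reproduce; a sketch with hypothetical collocated samples (the paper's exact definition of relative mean bias may differ):

```python
import numpy as np

def aod_validation(aod_sat, aod_aeronet):
    """MAE, MBE, relative mean bias, RMSE, and the fraction of
    retrievals within the ±(0.05 + 0.2*AOD_AERONET) envelope."""
    sat, ref = np.asarray(aod_sat), np.asarray(aod_aeronet)
    err = sat - ref
    envelope = 0.05 + 0.2 * ref
    return {
        "mae":  np.mean(np.abs(err)),
        "mbe":  np.mean(err),
        "rmb":  np.mean(sat) / np.mean(ref),  # one common definition
        "rmse": np.sqrt(np.mean(err ** 2)),
        "within_envelope": np.mean(np.abs(err) <= envelope),
    }

# hypothetical collocated satellite/AERONET samples
print(aod_validation([0.21, 0.35, 0.08], [0.18, 0.40, 0.10]))
```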