
• Graph and Network Theory for the Analysis of Criminal Networks

Social Network Analysis applies network and graph theory to the study of social phenomena and has proved highly relevant in areas such as criminology. This chapter provides an overview of key methods and tools for the analysis of criminal networks, illustrated through a real-world case study. Starting from available judicial acts, we extracted data on the interactions among suspects within two Sicilian Mafia clans, obtaining two weighted undirected graphs. We then investigated the role these weights play in the properties of the criminal networks, focusing on two key features: the weight distribution and the shortest path length. We also present an experiment that aims to construct an artificial network mirroring criminal behaviour. To this end, we conducted a comparative degree-distribution analysis between the real criminal networks and some of the most popular artificial network models: Watts–Strogatz, Erdős–Rényi, and Barabási–Albert, with some topology variations. This chapter will be a valuable tool for researchers who wish to employ social network analysis within their own area of interest.
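As a minimal sketch of such a degree-distribution comparison, the following uses pure-Python stand-ins for the Erdős–Rényi and Barabási–Albert generators; the graph sizes and parameters are illustrative assumptions, not those of the Mafia-clan graphs:

```python
import random

def erdos_renyi_degrees(n, p, rng):
    # Erdős–Rényi G(n, p): each possible edge appears independently
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

def barabasi_albert_degrees(n, m, rng):
    # Barabási–Albert preferential attachment: each new node links to
    # m targets drawn with probability proportional to current degree
    deg = [0] * n
    targets, repeated = list(range(m)), []
    for v in range(m, n):
        for t in set(targets):
            deg[v] += 1
            deg[t] += 1
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return deg

rng = random.Random(42)
er = erdos_renyi_degrees(500, 0.02, rng)   # mean degree ~ n*p = 10
ba = barabasi_albert_degrees(500, 5, rng)  # heavy-tailed: a few hubs
# a hub-dominated (boss-centred) topology shows up as a much larger
# maximum degree than in the homogeneous Erdős–Rényi graph
```

Comparing the tails of `er` and `ba` against the empirical degree distribution is the kind of check the chapter describes.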
• An LMI Approach-Based Mathematical Model to Control Aedes aegypti Mosquitoes Population via Biological Control

In this paper, a novel age-structured delayed mathematical model for controlling Aedes aegypti mosquitoes via Wolbachia-infected mosquitoes is introduced. To eliminate deadly mosquito-borne diseases such as dengue, chikungunya, yellow fever, and Zika, Wolbachia infection is introduced into the wild mosquito population at every life stage; this is one of the most promising biological control strategies. To predict the optimal amount of Wolbachia to release, a time-varying delay is considered. First, the positivity of the solution and the existence of both the Wolbachia-present and Wolbachia-free equilibria are discussed. Through linearization, the construction of a suitable Lyapunov–Krasovskii functional, and linear matrix inequality (LMI) theory, exponential stability is also analyzed. Finally, simulation results are presented for real-world data collected from the existing literature to show the effectiveness of the proposed model.
• Botnet detection using the fast-flux technique, based on an adaptive dynamic evolving spiking neural network algorithm

A botnet is a group of machines controlled remotely by an attacker, and represents a threat to web and data security. Fast-flux service networks (FFSNs) have been employed by bot herders to cover malicious botnet activities and to extend the lifetime of malicious servers by rapidly changing the IP addresses associated with a domain name. In the present research, we propose a new system, named the fast-flux botnet catcher system (FFBCS), which can detect fast-flux domains in an online mode using an adaptive dynamic evolving spiking neural network algorithm. Compared with two related approaches, the proposed system shows a high level of detection accuracy with low false-positive and false-negative rates, and high performance. The proposed adaptation of the algorithm increased the detection accuracy, which reached approximately 98.76%.
• Severity Estimation of Plant Leaf Diseases Using Segmentation Method

Plants have played a significant role in the history of humankind, chiefly as a source of nourishment for humans and animals. However, plants are typically vulnerable to various diseases, such as leaf blight, gray spot, and rust, which cause great losses to farmers and ranchers. An appropriate method to estimate the severity of diseases in plant leaves is therefore needed. This paper presents the fusion of the Fuzzy C-Means segmentation method with four different colour spaces, namely RGB, HSV, L*a*b*, and YCbCr, to estimate plant leaf disease severity. The performance of the proposed algorithms is recorded and compared with the previous methods, K-Means and Otsu's thresholding. The best combination for estimating leaf disease severity is Fuzzy C-Means with the YCbCr colour space: the average performance of Fuzzy C-Means is 91.08%, the average performance of YCbCr is 83.74%, and their combination produces 96.81% accuracy. This algorithm is more effective than the alternatives not only in segmentation performance but also in running time, averaging 34.75 s with a standard deviation of 0.2697 s.
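A minimal, self-contained sketch of the severity-estimation idea: a pure-Python two-cluster Fuzzy C-Means on one illustrative chroma channel. The pixel values and the 0.5 membership cut-off are invented for illustration, not taken from the paper:

```python
def fuzzy_c_means_1d(data, c=2, m=2.0, iters=50):
    # standard FCM updates on scalar data: memberships u, then centers
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (i + 1) / (c + 1) for i in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, u

# toy chroma values: healthy tissue near 110, lesions near 150
pixels = [108, 110, 112, 109, 107, 151, 149, 150, 111, 148]
centers, u = fuzzy_c_means_1d(pixels)
diseased = max(range(2), key=lambda i: centers[i])
severity = sum(1 for row in u if row[diseased] > 0.5) / len(pixels)
# severity is the fraction of pixels assigned to the lesion cluster
```

On a real leaf image, `pixels` would be one channel of the YCbCr-converted image, and `severity` the diseased-area percentage.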
• Controlling Wolbachia transmission and invasion dynamics among the Aedes aegypti population via an impulsive control strategy

This work is devoted to analyzing an impulsive control synthesis that maintains the self-sustainability of Wolbachia among Aedes aegypti mosquitoes. The paper provides a fractional-order Wolbachia invasive model. Through fixed point theory, existence and uniqueness results are derived for the proposed model. We also perform a global Mittag-Leffler stability analysis via linear matrix inequality theory and Lyapunov theory. As a result of this controller synthesis, the sustainability of Wolbachia is preserved and non-Wolbachia mosquitoes are eradicated. Finally, a numerical simulation is carried out on published data to analyze the behaviour of the proposed Wolbachia invasive model.
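The impulsive-release idea can be illustrated with a deliberately simplified model, the classical bistable frequency equation for Wolbachia invasion, rather than the paper's fractional-order system; the threshold, release size, and release period below are illustrative assumptions:

```python
def wolbachia_frequency(p0=0.1, theta=0.3, days=100.0, dt=0.01,
                        release_every=2.0, release_frac=0.12):
    # bistable frequency dynamics dp/dt = p(1-p)(p - theta):
    # below the unstable threshold theta the infection dies out,
    # above it the infection goes to fixation
    p, t, next_release = p0, 0.0, 0.0
    while t < days:
        if release_every and t >= next_release:
            p = min(1.0, p + release_frac)      # impulsive release
            next_release += release_every
        p += dt * p * (1.0 - p) * (p - theta)   # forward Euler step
        t += dt
    return p

with_releases = wolbachia_frequency()
without_releases = wolbachia_frequency(release_every=0.0)
# periodic impulses push the frequency over the threshold, after which
# Wolbachia sustains itself; without them it decays toward zero
```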
• Targeted ensemble machine classification approach for supporting IoT-enabled skin disease detection

The fast development of the Internet of Things (IoT) is changing our lives in many areas, especially in the health domain. For example, remote disease diagnosis can be performed more efficiently with advanced IoT technologies, which include not only hardware but also smart IoT data processing and learning algorithms, e.g. image-based disease classification. In this paper we work on a specific problem, skin condition classification, aiming to provide an implementable solution for IoT-led remote skin disease diagnosis applications. The research output is threefold. The first part is an IoT-Fog-Cloud remote diagnosis architecture with dynamic AI model configuration, including hardware examples. The second part is an evaluation survey of the performance of machine learning models for skin disease detection, covering a variety of data processing methods and their aggregations, with both training-testing and cross-testing validation on all seven conditions together and on each condition individually. The HAM10000 dataset is chosen for the evaluation after suitability comparisons with other relevant datasets. We discuss earlier work on ANN, SVM, and KNN models, but the evaluation mainly focuses on six widely applied deep learning models: VGG16, Inception, Xception, MobileNet, ResNet50, and DenseNet161. The results show that, for each of the seven major skin conditions, one of the top four models performs better on that specific condition than the others. Based on this discovery, the third part proposes a novel classification approach, the Targeted Ensemble Machine Classify Model (TEMCM), which dynamically combines a suitable model in a two-phase detection process. The final evaluation shows that the proposed model can achieve better performance.
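The two-phase routing idea behind TEMCM can be sketched as follows; the feature names, thresholds, and stand-in classifiers are invented for illustration (the actual models in the paper are deep networks such as VGG16 or DenseNet161):

```python
def temcm_predict(x, generic_model, targeted_models):
    # phase 1: a generic multi-class model proposes a condition
    candidate = generic_model(x)
    # phase 2: re-check with the model that performed best for that
    # condition; fall back to the phase-1 answer when none is registered
    confirm = targeted_models.get(candidate)
    if confirm is not None and not confirm(x):
        return "uncertain"
    return candidate

# invented stand-ins for trained classifiers and dermoscopic features
generic = lambda x: "mel" if x["asymmetry"] > 0.5 else "nv"
targeted = {"mel": lambda x: x["border_irregularity"] > 0.4}

label = temcm_predict({"asymmetry": 0.7, "border_irregularity": 0.6},
                      generic, targeted)
```

The point of the two-phase design is that the condition-specific model only has to do one job well, which is what the per-condition evaluation in the paper selects for.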
• Application of the Caputo–Fabrizio operator to suppress Aedes aegypti mosquitoes via Wolbachia: an LMI approach

The aim of this paper is to establish stability results based on the linear matrix inequality (LMI) approach for the addressed mathematical model using the Caputo–Fabrizio operator (CF operator). First, we extend some existing results for the Caputo fractional derivative in the literature to the new fractional-order operator without singular kernel introduced by Caputo and Fabrizio. Second, we create a mathematical model that increases cytoplasmic incompatibility (CI) in Aedes aegypti mosquitoes by releasing Wolbachia-infected mosquitoes; in this way, the population density of A. aegypti mosquitoes can be suppressed, controlling the most common mosquito-borne diseases such as dengue, Zika fever, chikungunya, and yellow fever. Our main aim is to examine the behaviour of the Caputo–Fabrizio operator on the logistic growth equation of a population system, and then to prove the existence and uniqueness of the solution of the considered mathematical model under the CF operator. We also establish alpha-exponential stability results for the system via the linear matrix inequality technique. Finally, a numerical example is provided to check the behaviour of the CF operator on the population system, incorporating real-world data available in the literature.
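For reference, the Caputo–Fabrizio operator mentioned above is the fractional derivative with a non-singular exponential kernel; for order α ∈ (0, 1) and a differentiable function f it reads

\[
{}^{CF}D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha} \int_0^t f'(s)\, \exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) \mathrm{d}s,
\]

where M(α) is a normalization function with M(0) = M(1) = 1. The singular kernel (t − s)^{−α} of the classical Caputo derivative is replaced by the bounded exponential kernel, which is what "without singular kernel" refers to.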
• Pseudoprimality related to the generalized Lucas sequences

Some arithmetic properties and new pseudoprimality results concerning generalized Lucas sequences are presented. The findings are connected to the classical Fibonacci, Lucas, Pell, and Pell–Lucas pseudoprimality. In the process, new integer sequences are found and some conjectures are formulated.
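One of the classical conditions underlying such pseudoprimality results is the Fibonacci congruence F(n − (5/n)) ≡ 0 (mod n), which every prime n coprime to 10 satisfies; composites that pass it are Fibonacci pseudoprimes. A short sketch (the fast-doubling routine is a standard technique, not code from the paper):

```python
def fib_pair(n, mod):
    # fast doubling: returns (F(n) mod m, F(n+1) mod m)
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1, mod)
    c = a * ((2 * b - a) % mod) % mod        # F(2k)
    d = (a * a + b * b) % mod                # F(2k+1)
    return (d, (c + d) % mod) if n & 1 else (c, d)

def jacobi5(n):
    # (5/n) = (n/5) by quadratic reciprocity, since 5 ≡ 1 (mod 4);
    # the nonzero squares modulo 5 are {1, 4}
    return 1 if n % 5 in (1, 4) else -1

def fibonacci_condition(n):
    # necessary condition for primality of n coprime to 10
    return fib_pair(n - jacobi5(n), n)[0] == 0

# 323 = 17 * 19 is composite yet passes: the smallest Fibonacci pseudoprime
```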
• Dielectron production in proton-proton and proton-lead collisions at √sNN = 5.02 TeV

The first measurements of dielectron production at midrapidity (|η_e| < 0.8) in proton–proton and proton–lead collisions at √sNN = 5.02 TeV at the LHC are presented. The dielectron cross section is measured with the ALICE detector as a function of the invariant mass m_ee and the pair transverse momentum p_T,ee in the ranges m_ee < 3.5 GeV/c^2 and p_T,ee < 8 GeV/c, in both collision systems. In proton–proton collisions, the charm and beauty cross sections are determined at midrapidity from a fit to the data with two different event generators. This complements the existing dielectron measurements performed at √s = 7 and 13 TeV. The slope of the √s dependence of the three measurements is described by FONLL calculations. The dielectron cross section measured in proton–lead collisions is in agreement, within the current precision, with the expected dielectron production without any nuclear matter effects for e+e− pairs from open heavy-flavor hadron decays. For the first time at LHC energies, the dielectron production in proton–lead and proton–proton collisions are directly compared at the same √sNN via the dielectron nuclear modification factor RpPb. The measurements are compared to model calculations including cold nuclear matter effects, or additional sources of dielectrons from thermal radiation.
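For orientation, the nuclear modification factor quoted above compares the proton–lead spectrum with the binary-scaled proton–proton one; in its cross-section form it is commonly written as

\[
R_{p\mathrm{Pb}} \;=\; \frac{1}{A}\,\frac{\mathrm{d}\sigma_{p\mathrm{Pb}}/\mathrm{d}m_{ee}}{\mathrm{d}\sigma_{pp}/\mathrm{d}m_{ee}}, \qquad A = 208,
\]

so that R_pPb = 1 corresponds to the absence of nuclear matter effects.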
• Production of ω mesons in pp collisions at √s = 7 TeV

The invariant differential cross section of inclusive ω(782) meson production at midrapidity (|y| < 0.5) in pp collisions at √s = 7 TeV was measured with the ALICE detector at the LHC over a transverse momentum range of 2 < pT < 17 GeV/c. The ω meson was reconstructed via its ω → π+π−π0 decay channel. The measured ω production cross section is compared to various calculations: PYTHIA 8.2 Monash 2013 describes the data, while PYTHIA 8.2 Tune 4C overestimates the data by about 50%. A recent NLO calculation, which includes a model describing the fragmentation of the whole vector-meson nonet, describes the data within uncertainties below 6 GeV/c, while it overestimates the data by up to 50% for higher pT. The ω/π0 ratio is in agreement with previous measurements at lower collision energies and the PYTHIA calculations. In addition, the measurement is compatible with transverse mass scaling within the measured pT range and the ratio is constant with C^(ω/π0) = 0.67±0.03 (stat) ±0.04 (sys) above a transverse momentum of 2.5 GeV/c.
• Centrality dependence of J/ψ and ψ(2S) production and nuclear modification in p-Pb collisions at √sNN = 8.16 TeV

The inclusive production of the J/ψ and ψ(2S) charmonium states is studied as a function of centrality in p-Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 8.16 TeV at the LHC. The measurement is performed in the dimuon decay channel with the ALICE apparatus in the centre-of-mass rapidity intervals −4.46 < ycms < −2.96 (Pb-going direction) and 2.03 < ycms < 3.53 (p-going direction), down to zero transverse momentum (pT). The J/ψ and ψ(2S) production cross sections are evaluated as a function of the collision centrality, estimated through the energy deposited in the zero degree calorimeter located in the Pb-going direction. The pT-differential J/ψ production cross section is measured at backward and forward rapidity for several centrality classes, together with the corresponding average ⟨pT⟩ and ⟨pT^2⟩ values. The nuclear effects affecting the production of both charmonium states are studied using the nuclear modification factor. In the p-going direction, a suppression of the production of both charmonium states is observed, which seems to increase from peripheral to central collisions. In the Pb-going direction, however, the centrality dependence is different for the two states: the nuclear modification factor of the J/ψ increases from below unity in peripheral collisions to above unity in central collisions, while for the ψ(2S) it stays below or consistent with unity for all centralities with no significant centrality dependence. The results are compared with measurements in p-Pb collisions at √sNN = 5.02 TeV and no significant dependence on the energy of the collision is observed. Finally, the results are compared with theoretical models implementing various nuclear matter effects.
• Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb−Pb collisions at √sNN = 2.76 TeV

In this paper, the first femtoscopic analysis of pion–kaon correlations at the LHC is reported. The analysis was performed on the Pb–Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. ALICE data on pion–kaon emission asymmetry are consistent with (3+1)-dimensional viscous hydrodynamics coupled to a statistical hadronisation model, resonance propagation, and decay code THERMINATOR 2 calculation, with an additional time delay between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.
• Transverse-momentum and event-shape dependence of D-meson flow harmonics in Pb–Pb collisions at √sNN = 5.02 TeV

The elliptic and triangular flow coefficients v2 and v3 of prompt D0, D+, and D*+ mesons were measured at midrapidity (|y| < 0.8) in Pb–Pb collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV with the ALICE detector at the LHC. The D mesons were reconstructed via their hadronic decays in the transverse momentum interval 1 < p_T < 36 GeV/c in central (0–10%) and semi-central (30–50%) collisions. Compared to pions, protons, and J/ψ mesons, the average D-meson v_n harmonics are compatible within uncertainties with a mass hierarchy for p_T ≤ 3 GeV/c, and are similar to those of charged pions for higher p_T. The coupling of the charm quark to the light quarks in the underlying medium is further investigated with the application of the event-shape engineering (ESE) technique to the D-meson v2 and p_T-differential yields. The D-meson v2 is correlated with average bulk elliptic flow in both central and semi-central collisions. Within the current precision, the ratios of per-event D-meson yields in the ESE-selected and unbiased samples are found to be compatible with unity. All the measurements are found to be reasonably well described by theoretical calculations including the effects of charm-quark transport and the recombination of charm quarks with light quarks in a hydrodynamically expanding medium.
• Search for a common baryon source in high-multiplicity pp collisions at the LHC

We report on the measurement of the size of the particle-emitting source from two-baryon correlations with ALICE in high-multiplicity pp collisions at √s = 13 TeV. The source radius is studied with low relative momentum p–p, p̄–p̄, p–Λ, and p̄–Λ̄ pairs as a function of the pair transverse mass m_T, considering for the first time in a quantitative way the effect of strong resonance decays. After correcting for this effect, the radii extracted for pairs of different particle species agree. This indicates that protons, antiprotons, Λs, and Λ̄s originate from the same source. Within the measured m_T range (1.1–2.2) GeV/c^2, the invariant radius of this common source varies between 1.3 and 0.85 fm. These results provide a precise reference for studies of the strong hadron–hadron interactions and for the investigation of collective properties in small colliding systems.
• Blessing of dimensionality at the edge and geometry of few-shot learning

In this paper we present theory and algorithms enabling classes of Artificial Intelligence (AI) systems to continuously and incrementally improve with a priori quantifiable guarantees – or more specifically remove classification errors – over time. This is distinct from state-of-the-art machine learning, AI, and software approaches. The theory enables building few-shot AI correction algorithms and provides conditions justifying their successful application. Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples. At the time of classification, the computational complexity is bounded by a few inner product calculations. Moreover, the implementation is shown to be very scalable. This makes it viable for deployment in applications where computational power and memory are limited, such as embedded environments. It also enables fast on-line optimisation using improved training samples. The approach is based on concentration of measure effects and stochastic separation theorems, and is illustrated with an example on the identification of faulty processes in Computer Numerical Control (CNC) milling and with a case study on adaptive removal of false positives in an industrial video surveillance and analytics system.
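The stochastic-separation mechanism behind such correctors can be illustrated in a few lines: in high dimension, a single stored "error" sample is, with overwhelming probability, linearly separable from the rest of the data by a hyperplane built from the sample itself. The dimension, sample count, and threshold fraction below are illustrative assumptions:

```python
import random

def single_error_corrector(error_point, alpha=0.5):
    # hyperplane <x, w> >= alpha * <w, w> with w equal to the stored
    # error sample: a one-shot, training-free correction rule
    w = error_point
    threshold = alpha * sum(v * v for v in w)
    return lambda x: sum(a * b for a, b in zip(x, w)) >= threshold

rng = random.Random(0)
d, n = 200, 1000                  # illustrative dimension and sample count
data = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
err = data[0]                     # pretend this sample was misclassified
corrector = single_error_corrector(err)
flags = [corrector(x) for x in data]
# in dimension 200 the stored point is, with overwhelming probability,
# the only sample on its side of the hyperplane
```

Classification-time cost is exactly one inner product per stored correction, which is the "few inner product calculations" bound mentioned above.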
• Bringing the Blessing of Dimensionality to the Edge

In this work we present a novel approach and algorithms for equipping Artificial Intelligence systems with capabilities to become better over time. A distinctive feature of the approach is that, in the supervised setting, its computational complexity is sub-linear in the number of training samples. This makes it particularly attractive in applications in which computational power and memory are limited. The approach is based on concentration of measure effects and stochastic separation theorems. The algorithms are illustrated with examples.
• Multiplicity dependence of J/ψ production at midrapidity in pp collisions at √s = 13 TeV

Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density dNch/dη in pp collisions at √s = 13 TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity (|y| < 0.9) in the dielectron channel, for events selected based on the charged-particle multiplicity at midrapidity (|η| < 1) and at forward rapidity (-3.7 < η < -1.7 and 2.8 < η < 5.1); both observables are normalized to their corresponding averages in minimum bias events. The increase of the normalized J/ψ yield with normalized dNch/dη is significantly stronger than linear and dependent on the transverse momentum. The data are compared to theoretical predictions, which describe the observed trends well, albeit not always quantitatively.
• Control strategies of a gas turbine generator: a comparative study

Gas turbine generators are commonly used in the oil and gas industries owing to their robustness and their association with other operating systems in combined cycles. Electrical generators may become unstable under severe load fluctuations; for these reasons, maintaining stability is paramount to ensure continuous operation. This paper deals with the modeling and simulation of a single-shaft gas turbine generator using the model developed by Rowen, incorporating different types of controllers: a Ziegler–Nichols PID controller, a fuzzy logic controller (FLC), an FLC-PID, and finally a hybrid PID/FLC/FLC-PID controller. The study was undertaken in the Matlab/Simulink environment with data from an in-service power plant owned by Sonatrach, Algiers, Algeria. The results show that the FLC-PID and hybrid tuned controllers provide the best time-domain performance.
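As a minimal illustration of the PID part of such a comparison, the following runs a discrete PID loop on a generic first-order plant; the gains and plant constants are illustrative assumptions, not parameters of the Rowen turbine model:

```python
def pid_step_response(kp, ki, kd, setpoint=1.0, dt=0.01, steps=5000,
                      gain=1.0, tau=0.5):
    # discrete PID driving a first-order plant dy/dt = (gain*u - y)/tau
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        prev_err = err
        y += dt * (gain * u - y) / tau      # forward Euler plant update
    return y

final = pid_step_response(kp=2.0, ki=1.0, kd=0.05)
# integral action removes the steady-state error: y settles at the setpoint
```

A fuzzy or hybrid controller would replace the `u = ...` line with a rule-based mapping from `err` and `derivative` to the control signal, which is where the time-domain differences reported in the paper come from.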
• Unveiling the strong interaction among hadrons at the LHC

One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete space-time lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons and so high-quality measurements exist only for hadrons containing up and down quarks. Here we demonstrate that measuring correlations in the momentum space between hadron pairs produced in ultrarelativistic proton–proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of proton–omega baryon correlations, the effect of the strong interaction for this hadron–hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations. The large number of hyperons identified in proton–proton collisions at the LHC, together with accurate modelling of the small (approximately one femtometre) inter-particle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon-hyperon interaction.
• Energy-aware scheduling of streaming applications on edge-devices in IoT based healthcare

The reliance on Network-on-Chip (NoC) based Multiprocessor Systems-on-Chips (MPSoCs) is proliferating in modern embedded systems to satisfy the higher performance requirements of multimedia streaming applications. Task-level coarse-grained software pipelining, also called re-timing, combined with Dynamic Voltage and Frequency Scaling (DVFS), has been shown to be an effective approach for significantly reducing the energy consumption of multiprocessor systems at the expense of additional delay. In this paper we develop a novel energy-aware scheduler for tasks with conditional constraints on Voltage Frequency Island (VFI) based heterogeneous NoC-MPSoCs, deploying re-timing integrated with DVFS for real-time streaming applications. We propose a novel task-level re-timing approach called R-CTG and integrate it with a nonlinear-programming-based scheduling and voltage scaling approach referred to as ALI-EBAD. The R-CTG approach minimizes the latency caused by re-timing without compromising energy efficiency. Compared to R-DAG, the state-of-the-art approach designed for traditional Directed Acyclic Graph (DAG) based task graphs, R-CTG significantly reduces re-timing latency because it only re-times tasks that free up wasted slack. To validate our claims we performed experiments using 12 real benchmarks; the results demonstrate that ALI-EBAD outperforms the CA-TMES-Search and CA-TMES-Quick task schedulers in terms of energy efficiency.
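The DVFS side of such a scheduler can be sketched with the usual convex energy model, in which dynamic energy scales as V² per cycle, so the scheduler picks the lowest-energy voltage-frequency setting that still meets the deadline (slack reclamation). The VFI operating points below are illustrative assumptions:

```python
def pick_operating_point(cycles, deadline_s, levels):
    # levels: (frequency_Hz, voltage_V) pairs of a voltage-frequency island;
    # dynamic energy ~ C * V^2 per cycle, so a lower-voltage point wins
    # whenever its slowed-down runtime still fits inside the deadline
    feasible = [(v * v * cycles, f, v)
                for f, v in levels if cycles / f <= deadline_s]
    if not feasible:
        return None                      # no setting meets the deadline
    _, f, v = min(feasible)
    return (f, v)

levels = [(1.0e9, 1.10), (0.8e9, 0.95), (0.6e9, 0.85)]  # illustrative points
best = pick_operating_point(cycles=5e8, deadline_s=0.7, levels=levels)
# with 0.7 s available, the 0.8 GHz / 0.95 V point is feasible and cheaper
```

Re-timing enlarges the per-task slack (`deadline_s` here), which is exactly what lets the scheduler drop to lower-voltage operating points and save energy.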