Software Research Institute (Midlands)
https://research.thea.ie/handle/20.500.12065/2435
2024-03-28T18:27:25Z
https://research.thea.ie/handle/20.500.12065/4687
A policy language for context-aware access control in zero-trust network
Xiao, Shiyu
Evolving computing technologies such as cloud, edge computing, and the Internet of
Things (IoT) are creating a more complex, dispersed, and dynamic enterprise
operational environment. New enterprise security architectures such as those based on
the concept of Zero Trust (ZT) are emerging to meet the challenges posed by these
changes. Context awareness is a notion from the field of ubiquitous computing that is
used to capture and react to the situation of an entity, based on the dynamics of a
particular application or system context. However, there is limited research and
discussion about the overlap between context awareness and Zero Trust, with existing
literature often treating them as separate entities, leading to potential inefficiencies.
One of the main challenges in merging the two concepts is the inflexibility of the
programming languages and systems used to craft access control policies, which
sometimes results in excessively rigid policies. Addressing this challenge could be
achieved through a new programming language specifically designed for greater
flexibility and a wider consideration of contextual factors, leading to more robust
security measures that align more effectively with the principles of Zero Trust.
This work conducts a systematic review of previous research in context-aware
access control to identify the various ways context is captured and expressed across
different access control types and application domains. Based on this review,
it identifies how context can help provide dynamic policy-based solutions for zero-trust applications.
It builds on previous work that designed a policy language for risk-based access
control in zero-trust networks. Specifically, this project extends the language
constructs to capture and handle dynamic contextual attributes.
Finally, it provides a proof of concept demonstrating that the extended language
gives correct access decisions based on the evaluation of contextual information in
a zero-trust network.
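The abstract does not reproduce the extended policy grammar. As a minimal sketch of the underlying idea, the Python fragment below shows how an access decision might combine a subject's role with dynamic contextual attributes re-evaluated on every request; the Context fields, the risk_score threshold, and the office-hours rule are hypothetical illustrations, not constructs from the thesis's language.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Context:
    """Dynamic contextual attributes sampled at request time (hypothetical)."""
    device_posture: str      # e.g. "managed" or "unmanaged"
    geo: str                 # coarse location of the requester
    time: datetime           # when the request was made
    risk_score: float        # 0.0 (benign) .. 1.0 (hostile)

def evaluate(subject_role: str, resource: str, ctx: Context) -> bool:
    """Zero-trust style decision: every request is re-evaluated
    against both identity and current context."""
    if ctx.risk_score > 0.7:                      # hard deny on high risk
        return False
    if subject_role == "engineer" and resource == "prod-db":
        # Contextual narrowing: managed device and office hours only.
        in_hours = 8 <= ctx.time.astimezone(timezone.utc).hour < 18
        return ctx.device_posture == "managed" and in_hours
    return False                                  # default deny

allowed = evaluate("engineer", "prod-db",
                   Context("managed", "IE", datetime.now(timezone.utc), 0.2))
print("permit" if allowed else "deny")
```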
2023-06-01T00:00:00Z
https://research.thea.ie/handle/20.500.12065/4686
Information centric networking based collaborative edge computing framework for the Internet of Things
Wang, Qian
The Internet of Things (IoT) has connected billions of devices and its proliferation
will continue. As IoT grows, so do the volumes of data it produces and exchanges.
The challenge lies in efficiently processing the massive amounts of IoT data.
Moreover, IoT applications prioritize extracting meaningful knowledge rather than
building connections with multiple devices. This results in a mismatch between the
host-centric nature of the current Internet and the information-centric demands of IoT
applications.
To address these challenges, this thesis presents an Information Centric Networking
(ICN) based collaborative edge computing framework for distributed IoT data
processing. Firstly, the functional architecture is investigated to enable in-network
data processing in IoT edge environments. Within this architecture, three software
components, namely Computation Manager, Computation Executor and Function
Repository, collaborate to resolve, deploy and execute IoT jobs. This thesis leverages
the powerful and prevalent MapReduce paradigm in the architecture design. The
ICN-based implementation empowers MapReduce job execution by categorizing
Computation Executors as mappers and reducers, developing a distributed
computational job tree construction protocol for the Computation Manager, and
defining an ICN naming scheme for request expression and data/function acquisition.
The Function Repository is distributed and maintained by each Computation
Executor, which retrieves and saves functions by parsing users' requests.
Experimental simulations have verified the feasibility of the proposed design and
demonstrated its effectiveness in reducing network traffic.
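The naming scheme and tree construction protocol are specified in the thesis rather than in this abstract. As a rough, hypothetical sketch of the idea, the Python fragment below models a reducer-rooted job tree of Computation Executors and a /job/&lt;function&gt;/&lt;scope&gt; style ICN name; all identifiers and the aggregation logic are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A Computation Executor in the job tree (hypothetical model)."""
    name: str                      # ICN name prefix served by this node
    role: str                      # "reducer" (interior) or "mapper" (leaf)
    children: list = field(default_factory=list)

def job_name(function: str, scope: str) -> str:
    """Hypothetical ICN naming convention: /job/<function>/<data scope>."""
    return f"/job/{function}{scope}"

# Toy job tree: one reducer aggregating two mappers' partial results.
tree = Node("/exec/gw0", "reducer", [
    Node("/exec/sensorA", "mapper"),
    Node("/exec/sensorB", "mapper"),
])

def execute(node: Node, interest: str) -> int:
    if node.role == "mapper":
        return 1   # stand-in for a locally computed partial result
    # A reducer forwards the Interest down the tree and aggregates the Data.
    return sum(execute(child, interest) for child in node.children)

print(execute(tree, job_name("count", "/building1/temperature")))  # -> 2
```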
Secondly, this thesis improves the proposed ICN-based computing framework by
considering the resource constraints of heterogeneous edge devices. It classifies edge
devices into two types: processing-capable nodes (i.e., mappers and reducers) and
forwarding-only nodes (called forwarders). Both types of node participate in the
computational job tree construction procedure. A job maintenance scheme is
developed to disseminate IoT jobs to appropriate devices and coordinate their
collaboration in serving multiple jobs simultaneously. Performance evaluation tests have confirmed the effectiveness of the proposed framework, showing decreased
network traffic compared to a centralized data processing approach.
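As a small illustration of the role split described above (with a hypothetical capacity threshold, not the thesis's actual classification scheme), resource-constrained devices could be treated as forwarders that relay job Interests without taking a mapper or reducer role:

```python
def assign_role(node_capacity: float, threshold: float = 0.5) -> str:
    """Hypothetical classification used during job tree construction:
    resource-constrained devices only forward, others process."""
    return "forwarder" if node_capacity < threshold else "processor"

# A forwarder still participates in tree construction by relaying the
# job Interest toward data sources without executing map/reduce tasks.
for name, capacity in [("gateway", 0.9), ("relay", 0.2), ("sensor-hub", 0.6)]:
    print(name, "->", assign_role(capacity))
```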
Thirdly, this thesis enhances the proposed framework to ensure exactly-once data
computation. Interruptions in IoT network connections during edge collaboration can
lead to data loss or duplicated data transmission and processing, which is
unacceptable for IoT applications with exactly-once computation requirements.
Although checkpoint-based schemes have been successfully developed in traditional
big data processing frameworks to achieve exactly-once data delivery/processing, it
is challenging to apply these solutions directly in IoT scenarios due to the differences
between IoT networks and datacentre environments. This thesis identifies three
specific challenges of achieving exactly-once computation in IoT collaborative edge
scenarios and devises a five-phase protocol to address them. The proposed protocol
consists of a job execution procedure for normal job operations and a job recovery
procedure to handle network failures. Simulation tests have shown that the proposed
design outperforms the checkpoint-based benchmark solution in terms of network
traffic and job execution time.
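The five phases themselves are not enumerated in the abstract. The sketch below illustrates only the basic requirement the protocol must satisfy: with retransmission providing at-least-once delivery, deduplicating on a per-sender sequence number restores exactly-once processing. All names are hypothetical, and this is not the thesis's protocol.

```python
class ExactlyOnceSink:
    """Minimal illustration: retransmission gives at-least-once delivery;
    deduplicating on a per-sender sequence number restores exactly-once
    processing. (Not the thesis's five-phase protocol.)"""

    def __init__(self):
        self.last_seq = {}     # sender id -> highest sequence processed
        self.total = 0

    def deliver(self, sender: str, seq: int, value: int) -> None:
        if seq <= self.last_seq.get(sender, -1):
            return             # duplicate caused by a retransmission: drop
        self.last_seq[sender] = seq
        self.total += value    # processed exactly once

sink = ExactlyOnceSink()
for sender, seq, value in [("a", 0, 5), ("a", 0, 5), ("a", 1, 7)]:
    sink.deliver(sender, seq, value)   # second ("a", 0, 5) is ignored
print(sink.total)                      # -> 12, not 17
```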
2023-08-01T00:00:00Z
https://research.thea.ie/handle/20.500.12065/4685
Deep reinforcement learning-based industrial robotic manipulation
Imtiaz, Muhammad Babar
Pick-and-place robotic systems can be found in all major industries, where they increase
throughput and efficiency. However, most pick-and-place applications in industry today
have been designed through hard-coded, static programming approaches. These
approaches lack any element of learning, so any modification to the task or environment
requires reprogramming from scratch. This thesis targets this particular area and
introduces learning into the robotic pick-and-place operation, making the operation
more efficient and more adaptable. We divide this thesis into three parts. In the first
part, we focus on learning and carrying out pick-and-place operations on various objects
moving on a conveyor belt in a non-visual environment, i.e., using proximity sensors
rather than vision sensors. The problem under consideration is formulated as a
Markov Decision Process (MDP) and solved using Reinforcement Learning (RL).
We train and test both model-free off-policy and on-policy RL algorithms in this
approach and perform a comparative analysis of them. In the second part, we develop a self-learning deep reinforcement learning (DRL) based framework for industrial pick-and-place tasks on regular and irregular-shaped objects in a cluttered environment. We
design the MDP and solve it by deploying the model-free off-policy Q-learning
algorithm. We use the pixel-wise parameterization technique in the fully connected
network (FCN) used as the Q-function approximator. In the third and main part,
we extend this vision-based, self-supervised DRL framework to enable the robotic
arm to learn and perform prehensile (grasping) and non-prehensile (non-grasping, e.g.,
sliding and pushing) manipulations together in a sequential manner, improving the
efficiency and throughput of the pick-and-place task. We design the MDP and solve it
using Deep Q-networks. We consider three robotic manipulations from the prehensile
and non-prehensile categories and design a large network of three FCNs without
creating any bottleneck. The pixel-wise parameterization technique is utilized for
Q-function approximation. We also present performance comparisons among variants
of the framework, with very promising test results at varying clutter densities across
a range of complex test scenarios.
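As a compact reminder of the underlying machinery (not the thesis's networks, which use FCN-based deep Q-learning over image pixels), the following tabular Q-learning loop applies the same off-policy temporal-difference update on a toy two-state pick/wait MDP; the states, actions, and rewards are invented for illustration.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning loop (illustrative; the thesis approximates Q
# with deep networks producing pixel-wise outputs rather than a table).
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = defaultdict(float)                 # (state, action) -> value

def step(state, action):
    """Hypothetical environment: reward 1 for picking a reachable object."""
    reward = 1.0 if (state == "object_at_gripper" and action == "pick") else 0.0
    next_state = random.choice(["object_at_gripper", "object_far"])
    return reward, next_state

actions = ["pick", "wait"]
state = "object_far"
for _ in range(1000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(state, x)])
    r, s2 = step(state, a)
    # Off-policy TD target: r + gamma * max_a' Q(s', a')
    Q[(state, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions)
                              - Q[(state, a)])
    state = s2

print({k: round(v, 2) for k, v in Q.items()})
```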
2023-09-01T00:00:00Z
https://research.thea.ie/handle/20.500.12065/4605
Denoising of Nifti (MRI) images with a regularized neighborhood pixel similarity wavelet algorithm
Akindele, Komoke Grace; Yu, Ming; Kanda, Paul Shekonya; Owoola, Eunice Oluwabunmi; Aribilola, Ifeoluwapo
The recovery of semantics from corrupted images is a significant challenge in image
processing. Noise can obscure features, interfere with accurate analysis, and bias results. To address
this issue, the Regularized Neighborhood Pixel Similarity Wavelet algorithm (PixSimWave) was
developed for denoising Nifti (MRI) images. The PixSimWave algorithm uses
regularized pixel similarity detection to improve the accuracy of noise reduction by creating patches
to analyze the intensity of pixels and locate matching pixels, as well as adaptive neighborhood
filtering to estimate noisy pixel values by allocating each pixel a weight based on its similarity. The
wavelet transform decomposes the image into scales and orientations, yielding a sparse image
representation to which a soft threshold is applied based on similarity to the original pixels. The
proposed method was evaluated on simulated and raw T1w MRIs, outperforming other methods with
an SSIM value of 0.9908 at a low Rician noise level of 3% and 0.9881 at a high noise level of 17%. In
experiments with added Gaussian noise, PSNR and SSIM also improved, indicating that the proposed
method outperformed other models while preserving edges and textures. In summary, the PixSimWave
algorithm is a viable noise-elimination approach that employs both sparse wavelet coefficients and
regularized similarity with decreased computation time, improving the accuracy of noise reduction
in images.
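PixSimWave itself is not released as code in this abstract. The sketch below shows only the generic wavelet soft-thresholding step it builds on, using the PyWavelets library with a universal threshold estimated from the finest detail band; the patch-based regularized similarity weighting that distinguishes PixSimWave is omitted.

```python
import numpy as np
import pywt   # PyWavelets

def wavelet_soft_denoise(img: np.ndarray, wavelet: str = "db2",
                         level: int = 2) -> np.ndarray:
    """Generic wavelet soft-thresholding (one ingredient of PixSimWave;
    the regularized pixel-similarity weighting is omitted)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band,
    # then apply the universal threshold sigma * sqrt(2 * log N).
    finest_diag = coeffs[-1][-1]
    sigma = np.median(np.abs(finest_diag)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail_bands)
        for detail_bands in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
clean = wavelet_soft_denoise(noisy)
print(clean.shape)   # same spatial size as the input
```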
2023-09-10T00:00:00Z