
Edge Computing Project Ideas List - Part 1

What is Edge Computing?

Edge computing technologies make use of computing power available outside traditional and cloud data centers, moving workloads closer to where data is produced so that actions can be taken in response to an analysis of that data. By utilizing and controlling the computational power available at remote sites such as factories, retail outlets, warehouses, hotels, distribution centers, or vehicles, developers can build applications that:

  • Significantly lower network bandwidth requirements
  • Increase the privacy of sensitive data
  • Enable operation even when network connections are down

Some of the essential elements that make up the edge ecosystem include:

  • Cloud
    This public or private cloud serves as a repository for container-based workloads such as applications and machine learning models. It also hosts and runs the software that orchestrates and manages the many edge nodes. Workloads in the cloud communicate with workloads at the edge, including local and device workloads, and the cloud can be the source or destination of any data the other nodes need.
  • Edge device
    An edge device is a piece of equipment built for a specific purpose that also has built-in processing power. Interesting work can be done on edge devices such as an assembly line on a factory floor, an ATM, a smart camera, or a car. Driven largely by cost considerations, an edge device usually has constrained processing resources: ARM or x86 class processors with 1 or 4 cores, 256 MB of memory, and perhaps a gigabyte of local storage are common. Although edge devices can be more capable, they are currently the exception rather than the rule.
  • Edge node
    In this context, "edge node" refers to any edge device, edge server, or edge gateway used for edge computing.
  • Edge cluster/server
    An edge cluster or server is a general-purpose IT computer located at a remote operations facility such as a factory, retail store, hotel, distribution center, or bank. An edge cluster/server is typically built in a conventional industrial PC or rack-mounted form factor. Edge servers with 8, 16, or more CPU cores, 16 GB of memory, and tens of gigabytes of local storage are common. An edge cluster or server usually runs enterprise application workloads and shared services.
  • Edge gateway
    An edge gateway is an edge cluster or server that, in addition to hosting business application workloads and shared services, performs network functions such as protocol translation, network termination, tunneling, firewalling, or wireless connectivity. Although some edge devices can host network functions or act as a limited gateway, edge gateways are typically separate from edge devices.

IoT sensors are fixed-function devices that collect and transmit data to an edge server or the cloud but lack onboard compute, memory, and storage. Because they are fixed-function devices on which containers cannot be deployed, they connect to the other nodes but are not themselves considered edge nodes.

With a fundamental understanding of the edge ecosystem in place, let's move on to a list of edge computing project ideas.

Edge Computing Project Ideas List

Computation-Intensive Service and Data Re-Scheduling in Edge Computing

Description of the project:

Collaboration among Internet of Things (IoT) services looks promising for serving complex requests in edge networks. In this context, IoT services typically represent the capabilities of IoT devices. Data- or computation-intensive IoT services fulfill a request either by requiring a significant amount of computational power or by consuming a significant amount of sensory data. It is difficult to locate functionally equivalent IoT services that also satisfy the predetermined spatial constraints.

This is because services with the required functionality may not exist in the required region under the current deployment of IoT services. To address this problem, we propose an energy-aware Data- and Computation-intensive service Migration and Scheduling (DCMS) mechanism that reschedules particular services from their hosting devices to devices within the geographical region specified by the request. Extensive testing and evaluation show that, compared with state-of-the-art methods, DCMS is promising in lowering average delay and energy consumption.
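As a rough illustration of the rescheduling idea, the following Python sketch greedily picks a migration target inside the request's region by trading off estimated delay against energy. The device records, cost weights, and field names are hypothetical; this is not the paper's actual DCMS algorithm.

import math

# Hypothetical device records: position, CPU capacity (MIPS), energy cost per MI.
devices = [
    {"id": "d1", "pos": (0.0, 0.0), "mips": 500, "energy_per_mi": 0.8},
    {"id": "d2", "pos": (3.0, 4.0), "mips": 800, "energy_per_mi": 1.1},
    {"id": "d3", "pos": (1.0, 1.0), "mips": 300, "energy_per_mi": 0.5},
]

def in_region(pos, center, radius):
    """True if a device lies inside the circular region specified by the request."""
    return math.dist(pos, center) <= radius

def migration_target(devices, request, w_delay=0.5, w_energy=0.5):
    """Pick the in-region device that minimizes a weighted delay/energy cost."""
    best, best_cost = None, float("inf")
    for d in devices:
        if not in_region(d["pos"], request["center"], request["radius"]):
            continue
        delay = request["workload_mi"] / d["mips"]             # execution delay (s)
        energy = request["workload_mi"] * d["energy_per_mi"]   # energy cost (toy units)
        cost = w_delay * delay + w_energy * energy
        if cost < best_cost:
            best, best_cost = d, cost
    return best

request = {"center": (1.0, 1.0), "radius": 2.5, "workload_mi": 1200}
print(migration_target(devices, request)["id"])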

Deep learning-based job scheduling in IoT edge networks

Description of the project:

Edge computing (EC) has recently become a viable paradigm that enables resource-hungry Internet of Things (IoT) applications to use low-delay services at the network edge. To maximize the long-term task satisfaction degree (LTSD), numerous tasks are scheduled to virtual machines (VMs) deployed at the edge server. However, scheduling application tasks is extremely difficult because of the edge server's constrained processing resources. This work studies a task scheduling problem in the EC scenario. The problem is formulated as a Markov decision process (MDP) for which the state, action, state transition, and reward are designed.

Given the variety of tasks and the heterogeneity of the available resources, we use deep reinforcement learning (DRL) to handle both time scheduling (i.e., the order in which tasks are executed) and resource allocation (i.e., which VM each task is allocated to). The task scheduling problem is addressed using a fully connected neural network (FCN), and a policy-based REINFORCE algorithm is proposed. Simulation results show that the proposed DRL-based task scheduling outperforms existing approaches in the literature in terms of average task satisfaction degree and success ratio.
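To make the policy-gradient part concrete, here is a minimal REINFORCE sketch for assigning tasks to VMs. It uses a linear softmax policy as a stand-in for the project's FCN, and the task features, reward (negative completion time), and hyperparameters are all assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
N_VMS, N_FEAT = 3, 4                # assumed: 3 edge VMs, 4 task/VM features
W = np.zeros((N_FEAT, N_VMS))       # linear policy weights (stand-in for the FCN)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def episode(n_tasks=10):
    """Schedule a batch of tasks; return per-step (features, action, reward)."""
    steps = []
    vm_busy = np.zeros(N_VMS)                        # queued work per VM (s)
    for _ in range(n_tasks):
        task = rng.uniform(0.5, 2.0, size=N_FEAT)    # toy task feature vector
        probs = softmax(task @ W)
        a = rng.choice(N_VMS, p=probs)
        finish = vm_busy[a] + task[0]                # toy completion time
        vm_busy[a] = finish
        steps.append((task, a, -finish))             # reward: negative latency
    return steps

def reinforce_update(steps, lr=0.01):
    """Policy-gradient (REINFORCE) update on the linear softmax policy."""
    global W
    returns = np.cumsum([r for _, _, r in steps][::-1])[::-1]   # reward-to-go
    for (x, a, _), G in zip(steps, returns):
        probs = softmax(x @ W)
        grad = -np.outer(x, probs)
        grad[:, a] += x                  # d log pi(a|x) / dW
        W += lr * G * grad

for _ in range(200):
    reinforce_update(episode())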

A deep reinforcement learning job offloading framework for edge networks

Description of the project:

Due to the rapid growth of mobile data and the unprecedented demand for computing power, resource-constrained edge devices cannot fully meet the requirements of Internet of Things (IoT) applications and Deep Neural Network (DNN) computing. Edge offloading, a distributed computing paradigm, can overcome the resource limitations of IoT devices, reduce their computational burden, and improve the efficiency of task processing by moving complicated jobs from IoT devices to edge-cloud servers.

However, because the optimal offloading decision-making problem is NP-hard, it is difficult to obtain results efficiently with conventional optimization techniques. Additionally, existing deep learning techniques still have significant issues, such as a slow learning rate and poor ability to adapt to new environments. To address these issues, we propose a Deep Meta Reinforcement Learning-based Offloading (DMRO) technique, which combines multiple parallel DNNs with Q-learning to make precise offloading decisions.

By combining the perception capability of deep learning, the decision-making ability of reinforcement learning, and the fast environment-learning ability of meta-learning, the best offloading strategy for a changing environment can be obtained quickly and flexibly. Through several simulation studies, we assess the efficacy of DMRO and find that its offloading performance improves by 17.6% compared with conventional Deep Reinforcement Learning (DRL) methods. The model can also quickly adapt to a new MEC environment and shows excellent portability in making real-time offloading decisions.
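DMRO itself layers meta-learning over several DNNs, but the core offloading decision loop can be pictured with a plain tabular Q-learning sketch. The states, actions, and delay/energy figures below are toy assumptions, not the project's model.

import random

random.seed(1)
ACTIONS = ["local", "edge", "cloud"]          # assumed offloading targets
STATES = range(5)                             # toy state: local queue length 0..4
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: returns (next_state, reward). Reward = -(delay + energy)."""
    delay = {"local": 1.0 + state, "edge": 2.0, "cloud": 4.0}[action]
    energy = {"local": 3.0, "edge": 1.0, "cloud": 0.5}[action]
    next_state = min(4, state + 1) if action == "local" else max(0, state - 1)
    return next_state, -(delay + energy)

alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda x: Q[(state, x)]))
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])   # Q-learning update
    state = nxt

print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in STATES})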

Power-Efficient Edge networking under Delay Constraints

Description of the project:

Green communication is one of the main objectives of networks beyond 5G. However, as more and more delay-sensitive applications are developed, the conflict between task delay requirements and energy saving becomes more pronounced on the device side. This research concentrates on a mobile edge computing system with limited local and edge computing capacity, where tasks may be rejected because of latency violations.

First, an energy-minimization problem is formulated for both partial and binary offloading modes. Task assignment and resource allocation under the two modes are jointly optimized using a low-complexity heuristic approach and a Lagrange dual scheme, respectively. In-depth simulations verify the effectiveness of the proposed schemes against other baseline schemes. In particular, a task priority model is designed to efficiently reduce the number of abandoned tasks and improve the MEC server's service performance.
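To see what the binary mode trades off, here is a minimal Python sketch of an energy/delay model for choosing between local execution and offloading under a deadline. The kappa * C * f^2 local-energy form and all parameter values are textbook-style assumptions, not the paper's exact formulation.

def local_energy(cycles, f, kappa=1e-27):
    """Energy (J) and delay (s) for local execution at CPU frequency f (Hz)."""
    return kappa * cycles * f ** 2, cycles / f

def offload_energy(bits, rate, p_tx, cycles, f_server):
    """Device energy (J) spent transmitting plus total delay when offloading."""
    t_tx = bits / rate
    return p_tx * t_tx, t_tx + cycles / f_server

def binary_offload_decision(task, deadline):
    """Pick the mode with lower device energy among modes that meet the deadline."""
    e_loc, t_loc = local_energy(task["cycles"], task["f_local"])
    e_off, t_off = offload_energy(task["bits"], task["rate"], task["p_tx"],
                                  task["cycles"], task["f_server"])
    feasible = [(e, m) for e, t, m in [(e_loc, t_loc, "local"), (e_off, t_off, "offload")]
                if t <= deadline]
    return min(feasible)[1] if feasible else "reject"

task = {"cycles": 1e9, "f_local": 1e9, "bits": 2e6, "rate": 10e6,
        "p_tx": 0.5, "f_server": 10e9}
print(binary_offload_decision(task, deadline=0.5))   # offloading meets the deadline here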

Offloading energy-conscious inference for DNN-driven applications on mobile edge clouds

Description of the project:

Deep Neural Networks (DNNs) have been successfully applied in several application sectors as the focus on Artificial Intelligence (AI) applications grows. Executing a trained DNN model requires a sizable amount of processing power because the depth and number of neurons of DNNs keep increasing. Modern GPU-equipped large-scale data centers can meet DNNs' ever-increasing resource demands.

However, as mobile edge computing and 5G technologies become more widely available, they open up new opportunities for DNN-driven AI applications, particularly when those applications use data sets dispersed across several locations. Performing inference, i.e., running a pre-trained DNN on freshly generated image and video input from devices, is one of the key processes of an AI application in mobile edge clouds.

To accommodate as many DNN inference requests as possible, we study the offloading of DNN inference queries in a 5G mobile edge cloud (MEC). We offer exact and approximate solutions to the inference offloading problem in MECs. We also consider dynamic offloading of inference requests and provide an adaptive real-time algorithm. The proposed methods are assessed through a real-world test-bed implementation and extensive simulations. The experimental findings show that the suggested algorithms outperform their theoretical counterparts and other comparable heuristics described in the literature.
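A drastically simplified view of the admission side of this problem is the greedy placement below. The server capacities, demand units, and the smallest-demand-first rule are illustrative assumptions, not the project's exact or approximate algorithms.

def admit_requests(requests, servers):
    """Greedy admission: smallest demands first, placed on the least-loaded
    server that still has enough capacity for the inference request."""
    admitted = []
    load = {s["id"]: 0.0 for s in servers}
    for req in sorted(requests, key=lambda r: r["demand"]):
        candidates = [s for s in servers
                      if load[s["id"]] + req["demand"] <= s["capacity"]]
        if not candidates:
            continue                          # request is rejected
        target = min(candidates, key=lambda s: load[s["id"]])
        load[target["id"]] += req["demand"]
        admitted.append((req["id"], target["id"]))
    return admitted

servers = [{"id": "mec1", "capacity": 10.0}, {"id": "mec2", "capacity": 6.0}]
requests = [{"id": "r1", "demand": 4.0}, {"id": "r2", "demand": 7.0},
            {"id": "r3", "demand": 3.0}, {"id": "r4", "demand": 5.0}]
print(admit_requests(requests, servers))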

Hybrid Workflow rescheduling on Edge Cloud networks

Description of the project:

Internet of Things applications can be thought of as workflows that mix stream and batch processing to achieve data analytics goals in various domains, including smart homes, healthcare, bioinformatics, astronomy, and education. The biggest difficulty with this combination is that batch and stream computations differ in their quality-of-service constraints.

While batch processing is more likely to be resource-intensive, stream processing is extremely latency-sensitive. In this study, we provide a two-stage framework for end-to-end hybrid workflow scheduling on an edge cloud system. In the first stage, we propose a resource estimation method built on Gradient Descent Search (GDS). In the second stage, we provide a cluster-based provisioning and scheduling technique for hybrid workflows on heterogeneous edge cloud resources. We model execution time and monetary cost as a multi-objective problem under deadline and throughput constraints.
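The first-stage idea, estimating how much capacity a stream operator needs, can be sketched with a toy gradient-descent loop. The quadratic throughput-gap loss, the price weight, and the parameter names are assumptions for illustration, not the project's GDS formulation.

def estimate_resources(arrival_rate, service_demand, price, lr=1e-4, iters=2000):
    """Gradient descent on f (CPU share): penalize missing the throughput target
    (arrival_rate) and the monetary cost of the provisioned share."""
    f = 0.1                                      # initial CPU share
    for _ in range(iters):
        throughput = f / service_demand          # tasks/s the share can sustain
        # loss = (throughput - arrival_rate)^2 + price * f
        grad = 2 * (throughput - arrival_rate) / service_demand + price
        f = max(0.0, f - lr * grad)              # small step size keeps the loop stable
    return f

# e.g. 50 events/s arriving, each needing 0.02 CPU-seconds, small price weight
print(round(estimate_resources(arrival_rate=50, service_demand=0.02, price=0.1), 3))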

Results show that the framework can efficiently regulate the execution of hybrid workflows as parameters such as stream arrival rate, computational throughput, and workflow complexity vary. For large-scale hybrid workflows, the proposed scheduler improves execution time and cost by an average of 8% and 35%, respectively, compared with a meta-heuristic technique based on Particle Swarm Optimization (PSO).

An advanced butterfly optimization technique for data placement and rescheduling in edge computing settings

Description of the project:

Mobile edge computing (MEC) is an intriguing technology that aims to provide computing and memory resources close to mobile devices (MDs). However, MEC resources are limited, so it is important to manage them effectively and avoid waste. A workflow scheduling procedure aims to map tasks to the most suitable resources according to given objectives.

This study presents DBOA, a discrete variant of the Butterfly Optimization Algorithm (BOA) that uses the Levy flight method to accelerate convergence and avoid getting trapped in local optima. We also use a task prioritization technique to determine the order of task execution in the scientific workflows. We then apply DBOA to Dynamic Voltage and Frequency Scaling (DVFS)-based data-intensive workflow scheduling and data placement in MEC environments.
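The two ingredients named here, Levy-flight steps and fragrance-guided moves, look roughly like the continuous sketch below. Mantegna's algorithm for the Levy step is standard; the fragrance constants and the local-walk form are illustrative, and DBOA's discretization of positions into task orderings is not shown.

import math, random

random.seed(0)

def levy_step(beta=1.5):
    """One Levy-flight step length using Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def fragrance(fitness, c=0.01, a=0.1):
    """BOA-style fragrance f = c * I^a, where I is the (positive) fitness intensity."""
    return c * fitness ** a

def move(x, best, fit, p_switch=0.8):
    """One butterfly-style update: a global move toward the best solution,
    perturbed by a Levy step, or a local random walk."""
    r = random.random()
    step = levy_step()
    if random.random() < p_switch:
        return [xi + step * (r * r * bi - xi) * fragrance(fit) for xi, bi in zip(x, best)]
    return [xi + step * random.uniform(-1, 1) * fragrance(fit) for xi in x]

print(move([0.2, 0.7], [1.0, 0.4], fit=3.5))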

Extensive simulations are performed on numerous well-known scientific workflows of varying sizes to assess the effectiveness of the proposed scheduling method. The results of the experiments show that our approach can perform better than other algorithms in terms of power consumption, shared data overheads, and other factors.

Resource allocation and online computation offloading in mobile-edge computing

Description of the project:

Many computation-intensive applications, such as augmented reality and interactive gaming, have emerged with the proliferation of smart mobile devices. To meet the applications' low-latency requirements, mobile-edge computing (MEC) has been proposed as an extension of cloud computing. This article considers an MEC system deployed in a dense network with many base stations, in which heterogeneous computation tasks are successively generated on a smart device as it moves through the network.

The device user seeks the best task offloading, CPU frequency, and transmit power scheduling strategy to reduce long-term task completion latency and energy consumption. The problem is particularly challenging because of dynamic network conditions and stochastic task generation. Inspired by reinforcement learning, we transform the problem into a Markov decision process. We then propose an attention-based double deep Q network (DDQN) approach that uses two neural networks to estimate the energy and latency rewards that each action generates.

Additionally, a context-aware attention mechanism is designed to adaptively assign weights to each action's value estimates. We also carry out extensive simulations to compare the proposed approach against several DDQN-based and heuristic baselines.
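The core double-DQN trick, letting one network choose the next action while the other evaluates it, can be shown with a tiny tabular stand-in. The state/action sizes are arbitrary, and the project's attention mechanism is not modeled.

import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 6, 3           # toy sizes, e.g. channel states x {local, offload-BS1, offload-BS2}
q_online = rng.normal(size=(N_STATES, N_ACTIONS))
q_target = q_online.copy()

def ddqn_target(reward, next_state, gamma=0.95):
    """Double DQN target: the online table picks the action, the target table scores it.
    This decoupling reduces the overestimation bias of vanilla DQN."""
    a_star = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_target[next_state, a_star]

def update(state, action, reward, next_state, lr=0.1):
    td_error = ddqn_target(reward, next_state) - q_online[state, action]
    q_online[state, action] += lr * td_error

# periodically: q_target = q_online.copy()   (target-network sync)
update(state=2, action=1, reward=-1.3, next_state=4)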

A Self-Adaptive Job Scheduling for Network Edge Computing

Description of the project:

Mobile edge computing (MEC) is an emerging paradigm that supports low-latency applications in resource-constrained environments such as the Internet of Things and vehicular networks. MEC makes it possible to respond quickly to the massive amounts of data and service requests generated by mobile end users and IoT devices. However, the computers forming a MEC system typically have limited processing resources, which many tasks and many simultaneous service requests must share.

Scheduling and dispatching computational tasks from end users is a difficult problem in a MEC system, especially for latency-sensitive applications. To provide resource-efficient, low-latency service responses, we propose a self-adaptive task scheduling and dispatching strategy in this paper. Using reinforcement learning, the proposed method solves the scheduling problem by prioritizing computational tasks according to their attributes. Simulations and a small-scale case study on a MEC test bed confirm that the proposed scheme is both effective and efficient.
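A non-learning skeleton of such a dispatcher is shown below; the priority function is a hand-written stand-in for the attribute weighting that a reinforcement learner would tune. The task fields, server count, and deadline-plus-workload rule are all assumptions.

import heapq
import itertools

counter = itertools.count()            # tie-breaker so the heap never compares dicts

def priority(task):
    """Hypothetical priority: earlier deadlines and smaller workloads first."""
    return task["deadline"] + 0.1 * task["cycles"]

def dispatch(tasks, n_servers=2):
    heap = [(priority(t), next(counter), t) for t in tasks]
    heapq.heapify(heap)
    server_free = [0.0] * n_servers            # time each edge server becomes free
    plan = []
    while heap:
        _, _, t = heapq.heappop(heap)
        s = min(range(n_servers), key=lambda i: server_free[i])
        start = server_free[s]
        server_free[s] = start + t["cycles"]
        plan.append((t["id"], s, start))
    return plan

tasks = [{"id": "t1", "deadline": 5.0, "cycles": 2.0},
         {"id": "t2", "deadline": 1.0, "cycles": 1.0},
         {"id": "t3", "deadline": 3.0, "cycles": 4.0}]
print(dispatch(tasks))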

RL-based job offloading for edge-enabled sensor networks in smart healthcare

Description of the project:

Sensor nodes and Internet-of-Medical-Things (IoMT) devices outfitted with sensors are now in widespread use. These networked sensors gather huge amounts of data from numerous smart healthcare applications, and the resulting data is used to support decision-making. Edge computing, which provides computational resources close to the devices, is an effective tool for processing the gathered sensor data.

Meanwhile, attention has turned to the intelligent and precise resource management made possible by Artificial Intelligence (AI), particularly in healthcare systems. AI will significantly improve the computational speed and capability of IoMT-based healthcare devices. The problem, however, is that these energy-hungry, battery-limited, and delay-intolerant portable devices still rely on outdated and inefficient traditional patterns of fair resource allocation.

This study proposes a Computation Offloading with Reinforcement Learning (CORL) scheme to reduce latency and energy consumption. The problem is formulated as a joint delay and energy cost minimization problem subject to limited battery capacity and service latency deadline constraints.

Additionally, the proposed method searches for the best node with available resources to offload tasks to, balancing latency against energy consumption. We employ the iFogSim simulator to validate the proposed scheme under realistic assumptions. The experimental results demonstrate the scheme's advantages in energy conservation, reduced latency, and optimal node resource usage in edge-enabled IoT systems.
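One simple way to picture "learning which node balances latency and energy best" is an epsilon-greedy value estimate per candidate node, as in the sketch below. The node names, latency/energy figures, and the 0.6/0.4 weighting are invented for illustration; CORL's actual state and reward design is richer.

import random

random.seed(7)
NODES = ["edge-A", "edge-B", "cloud"]            # assumed candidate offloading nodes
value = {n: 0.0 for n in NODES}                  # running estimate of -(w*delay + (1-w)*energy)
count = {n: 0 for n in NODES}

def observe_cost(node, w=0.6):
    """Toy environment: each node has its own latency/energy characteristics."""
    delay = {"edge-A": 0.8, "edge-B": 1.2, "cloud": 2.5}[node] * random.uniform(0.8, 1.2)
    energy = {"edge-A": 1.2, "edge-B": 0.9, "cloud": 0.4}[node] * random.uniform(0.8, 1.2)
    return w * delay + (1 - w) * energy

def choose(eps=0.1):
    if random.random() < eps:
        return random.choice(NODES)
    return max(NODES, key=lambda n: value[n])

for _ in range(2000):
    n = choose()
    r = -observe_cost(n)
    count[n] += 1
    value[n] += (r - value[n]) / count[n]        # incremental mean update

print(max(NODES, key=lambda n: value[n]))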

A simulation-based optimization strategy for the design of reliability-conscious services in edge computing

Description of the project:

Edge computing is a cutting-edge architecture developed in response to the rise of the Internet of Things (IoT) and is used to improve the performance and security of traditional cloud computing systems. Thanks to the services computing methodology, edge computing systems can adapt to application requirements with greater agility and flexibility. Service composition, one of the most significant issues in services computing, faces various new difficulties in large-scale edge computing, including complicated protocol stacks, continual failures and recoveries, and an explosion of the search space.

In this research, we seek to address these issues by developing a simulation-based optimization strategy for reliability-aware service composition. Composite stochastic Petri net models are introduced to describe the dynamics of edge computing systems, and the related quantitative analysis is carried out. To address the state-space explosion problem in complex networks and complicated service processes, a time-scale decomposition technique is used to improve the efficiency of the model solution.

To considerably shrink the search space, an ordinal optimization technique is introduced together with simulation schemes for performance analysis and improvement. Finally, we run experiments based on real data, and the findings confirm the method's effectiveness.

An ant-mating optimization model for job scheduling in a fog computing environment

Description of the project:

Fog computing is a platform for developing upcoming applications with low bandwidth needs. The primary problem with fog computing is how to use its resources efficiently for delay-sensitive operations, because fog devices often have limited resources and are widely dispersed. To solve this issue, we propose and test a new job scheduling method that decreases energy consumption and overall system makespan on fog computing platforms.

The two main components of the proposed strategy are Ant Mating Optimization (AMO), a novel bio-inspired optimization technique, and an optimal distribution of jobs across nearby fog nodes. The goal is to find the best possible trade-off between the system makespan and the energy consumed by the fog computing resources serving end-user devices. Our empirical performance evaluation shows that the proposed approach outperforms the bee life algorithm, conventional particle swarm optimization, and the genetic algorithm in terms of makespan and energy consumption.
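Whatever search operators AMO uses, it needs a fitness function over candidate job-to-node assignments. A minimal version of that bi-objective evaluation might look like the following; the job sizes, node speeds, and power figures are made up, and the mating/search operators themselves are not shown.

def evaluate(assignment, jobs, nodes):
    """Fitness a metaheuristic could optimize: makespan and total energy of mapping
    each job (cycles) to a fog node (speed in cycles/s, power in watts)."""
    busy = {n: 0.0 for n in nodes}
    energy = 0.0
    for job_id, node_id in assignment.items():
        t = jobs[job_id] / nodes[node_id]["speed"]
        busy[node_id] += t
        energy += t * nodes[node_id]["power"]
    return max(busy.values()), energy            # (makespan, energy)

jobs = {"j1": 4e9, "j2": 2e9, "j3": 6e9}
nodes = {"fog1": {"speed": 2e9, "power": 10.0},
         "fog2": {"speed": 1e9, "power": 4.0}}
print(evaluate({"j1": "fog1", "j2": "fog2", "j3": "fog1"}, jobs, nodes))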

Autonomic computation offloading for IoT applications on the mobile edge

Description of the project:

Computation offloading allows resource-constrained mobile devices to complete operations that demand high computing capability. The mobile cloud is a well-known offloading platform that typically relies on far-end network solutions to augment the computing capabilities of resource-limited mobile devices. The far-end solution, however, imposes higher latency or network delay on user devices, which negatively impacts real-time wireless Internet of Things (IoT) applications. This paper therefore proposes a near-end network solution for computation offloading in the mobile edge/fog.

Computation offloading in the mobile edge/fog is complicated by the mobility, heterogeneity, and geographic distribution of mobile devices. To handle the computing resource demand from many mobile devices, a deep Q-learning-based autonomic management system is proposed. To provide the edge/fog computing service, the distributed edge/fog network controller (FNC) scavenges the available edge/fog resources, such as processor, memory, and network.

Because of the randomness in resource availability and the wide range of possibilities for assigning those resources to offloaded computation, the problem is well suited to modeling as a Markov decision process (MDP) and to solution by reinforcement learning. The proposed model is simulated in MATLAB, taking fluctuating resource needs and end-user device mobility into account.
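The MDP framing boils down to a state, an action, a stochastic transition, and a reward. The fragment below sketches one such step in Python rather than MATLAB, with a made-up state layout (free CPU units plus a coarse mobility zone) and reward shape purely for illustration.

import random

random.seed(3)

def transition(state, action):
    """One MDP step for the offloading controller.
    state  : (free_cpu_units, device_zone)  - toy discretization
    action : number of CPU units granted to the offloaded task
    Randomness models fluctuating resource availability and device mobility."""
    free_cpu, zone = state
    granted = min(action, free_cpu)
    reward = 2.0 * granted - 0.5 * (action - granted)    # penalize over-asking
    released = random.randint(0, 2)                      # other tasks finishing
    arrived = random.randint(0, 2)                       # new demand arriving
    next_cpu = max(0, min(8, free_cpu - granted + released - arrived))
    next_zone = (zone + random.choice([-1, 0, 1])) % 4   # user moves between zones
    return (next_cpu, next_zone), reward

state = (5, 0)
for _ in range(3):
    state, r = transition(state, action=2)
    print(state, round(r, 2))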

By reducing service computing latency, the proposed autonomic deep Q-learning-based technique considerably boosts the performance of computation offloading. For comparison, the total power consumption resulting from various offloading decisions is also investigated, demonstrating the approach's energy efficiency relative to state-of-the-art computation offloading alternatives.

