GS-LinkHQ

GS-LinkHQ is a reinforcement learning based agent that generates policies for both computation offloading and distributed caching in the GEdge-Platform. The policies support horizontal collaboration (between a cloud edge and a neighboring cloud edge) and vertical collaboration (between a cloud edge and the core cloud).

(Figure: policy_gen_workflow)

The policy for computation offloading

  • Provides offloading service developers (or service providers) with the edge resource allocation policies and capabilities required to implement offloading services
    • Supports offloading service provisioning through optimized resource allocation
  • Resource allocation policy (a minimal sketch of such a policy follows this list)
    • Where to deploy
    • How to deploy
    • Collaborative or non-collaborative deployment
      • Horizontal, vertical, and horizontal + vertical collaboration
  • Policy provisioning points
    • At service development time
    • When quality of service degrades (scale-out or scale-up)
    • When choosing a load balancing method
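
The exact shape of a generated policy is not specified in this README; the following is a minimal sketch, using assumed field names, of what such a resource allocation decision could look like.

# Illustrative sketch only: the field names and values are assumptions,
# not the actual GS-LinkHQ policy schema.
from dataclasses import dataclass
from enum import Enum

class Collaboration(Enum):
    NONE = "non-collaborative"
    HORIZONTAL = "horizontal"                    # cloud edge <-> neighboring cloud edge
    VERTICAL = "vertical"                        # cloud edge <-> core cloud
    HORIZONTAL_VERTICAL = "horizontal+vertical"

@dataclass
class AllocationPolicy:
    target_cluster: str                          # where to deploy (local edge, neighbor edge, core cloud)
    replicas: int                                # how to deploy (scale of the deployment)
    collaboration: Collaboration                 # collaborative or non-collaborative deployment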

The information considered when deciding the policy is as follows; a rough sketch of how these inputs could be encoded as an agent observation is shown after the list.

  • Offloading service definition
    1. CPU specification
    2. GPU usage and specification
    3. Storage/cache specification
    4. Service properties (network delay sensitive, availability sensitive, etc.)
  • Cloud-edge system resource status (including neighboring edges' information)
    1. CPU usage
    2. GPU usage
    3. Storage/cache usage
    4. Quality of deployed offloading services
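
As an illustration of how these inputs could be bundled into the agent's observation, the sketch below uses assumed field names; the actual encoding used by GS-LinkHQ may differ.

# Illustrative sketch only: the real state encoding used by the agent may differ.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ServiceDefinition:
    cpu_spec: float                  # 1. CPU specification
    gpu_spec: float                  # 2. GPU usage and specification
    storage_spec: float              # 3. storage/cache specification
    delay_sensitive: bool            # 4. service properties
    availability_sensitive: bool

@dataclass
class ClusterStatus:
    cpu_usage: float                 # 1. CPU usage
    gpu_usage: float                 # 2. GPU usage
    storage_usage: float             # 3. storage/cache usage
    service_quality: float           # 4. quality of deployed offloading services

@dataclass
class Observation:
    service: ServiceDefinition
    clusters: Dict[str, ClusterStatus] = field(default_factory=dict)   # local edge, neighbor edges, core cloud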

Components

GS-LinkHQ consists of three components.

(Figure: architecture)

  • vedge: A simulator used to develop the proposed methods first
    • Collaborative multi-cluster environment
    • Includes a resource map, a network performance map, and offloading service (task) definitions
  • agent: The reinforcement learning agent, developed with the above two projects
    • Targets the real-world (GEdge-Platform) environment

Each component communicates through a RESTful API.

(Figure: api-flow)
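
The concrete endpoints and payloads of these APIs are not documented here. The snippet below is a hypothetical sketch of the agent driving the vedge environment over HTTP, reusing the ENV_ADDRESS and ENV_PORT settings from docker-compose.yml; the /reset and /step paths and the payload shape are assumptions.

# Hypothetical illustration of the agent <-> vedge REST interaction.
# The /reset and /step endpoints and payload shapes are assumptions,
# not the documented GS-LinkHQ API.
import os
import requests

ENV_URL = f"http://{os.getenv('ENV_ADDRESS', 'vedge')}:{os.getenv('ENV_PORT', '80')}"

def reset_env():
    # Ask the simulated multi-cluster environment for an initial observation.
    return requests.post(f"{ENV_URL}/reset", timeout=5).json()

def step_env(action: int):
    # Send the chosen deployment action and receive the next observation and reward.
    return requests.post(f"{ENV_URL}/step", json={"action": action}, timeout=5).json()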

Requirements

  • NVIDIA Docker
  • Docker Compose

Usage

1. Clone repository

git clone https://github.com/gedge-platform/gs-linkhq.git

2. Edit hyperparameters (Optional)

# docker-compose.yml
version: "3.9"
services:
  ...
  agent:
    ...
    environment:
      ENV_ADDRESS: vedge
      ENV_PORT: 80
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
      MEM_CAPACITY: 100000
      BATCH_SIZE: 1000 
      GAMMA: 0.99
      EPS_START: 0.9
      EPS_END: 0.00001
      EPS_DECAY: 20000
      TAU: 0.005
      LR: 0.0001
    ...
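
These variables correspond to the usual hyperparameters of a DQN-style agent with epsilon-greedy exploration and soft target-network updates. The sketch below only illustrates how the agent container might read them; the variable names and defaults come from docker-compose.yml, while the decay schedule and surrounding code are assumptions.

# Sketch of how the agent container might consume these environment variables.
# The names come from docker-compose.yml; the exponential epsilon schedule is
# an assumption about the implementation.
import math
import os

MEM_CAPACITY = int(os.getenv("MEM_CAPACITY", "100000"))    # replay buffer size
BATCH_SIZE = int(os.getenv("BATCH_SIZE", "1000"))
GAMMA = float(os.getenv("GAMMA", "0.99"))                  # discount factor
EPS_START = float(os.getenv("EPS_START", "0.9"))           # initial exploration rate
EPS_END = float(os.getenv("EPS_END", "0.00001"))           # final exploration rate
EPS_DECAY = float(os.getenv("EPS_DECAY", "20000"))         # decay time constant (steps)
TAU = float(os.getenv("TAU", "0.005"))                     # soft target-update rate
LR = float(os.getenv("LR", "0.0001"))                      # optimizer learning rate

def epsilon(step: int) -> float:
    # Exploration probability decays from EPS_START toward EPS_END.
    return EPS_END + (EPS_START - EPS_END) * math.exp(-step / EPS_DECAY)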

3. Build images and run containers (model training)

docker compose up --build

4. Start Recommendation Server

cd srv
docker build --tag linkhq:4.0 .
docker run -p 80:80 linkhq:4.0
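
Once the server is listening on port 80, a client can request a recommendation over HTTP. The /recommend path and the payload fields below are assumptions made for illustration; consult the code under srv for the actual API.

# Hypothetical client call; the /recommend path and payload fields are assumptions.
import requests

resp = requests.post(
    "http://localhost:80/recommend",
    json={"cpu": 2.0, "gpu": 1.0, "storage": 10.0, "delay_sensitive": True},
    timeout=5,
)
print(resp.json())   # e.g. the recommended target cluster and deployment mode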
