Matthijs Meeting Notes Rolling Log
Meeting with Matthijs and Animesh in the morning, followed by a PhD group meeting including Alex.
- Communication delay between VMs is implemented, what values to choose
- We have 1 paper with latency values for local, regional, etc: https://dl.acm.org/doi/pdf/10.1145/3472883.3487004 (page 4)
- Try to find multiple papers with latency numbers, take the average of 2/3 papers
- Best: Do an invariance study as a survey. Take many papers which talk about this, and perform analysis. These papers are high profile, high citation. Maybe something for in the future. (names such as Carey Williamson)
- Benchmark figure
- Indicate more clearly that there can be 0/1/many VMs and containers by placing boxes behind each other
- Show the most general setup in the picture, a clear overview. Make it flexible
- Arrows aren't mirrored and perfectly aligned in the figure (both horizontal and vertical)
- Make sure each machine box looks the same. Maybe remove the mirror to highlight infinite scalability
- Benchmark API
- Define the API clearly in a table. This has the benefit that it's easier to understand for new people, and you can check if you overlooked something
- In the page budget, mark pages for tech report
- And make a new section below the main page budget showing extra tech report pages
- Mark on each page who does what
- Make the 'methodology' section a separate section. Nothing technical should be part of the introduction
- Related work: Early on you can incorporate this in the introduction. You can't make a separate section about this at the start because you haven't yet explained all technical details required to understand the comparison
- The main related work section should be at the end instead
- The 'beyond cloud computing' section should start immediately after the intro. You should not wait for 8 pages or more before explaining the main topic of the paper
- In the current 1st page budget draft, add page 7 between sec 3 and sec 3.1. Explain in 1 page what cloud computing problems are, use lots of references. Do not explain stuff that you can just reference. It's mostly about a table.
- You can talk a small bit about SLA in the cloud section when talking about metrics
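The VM-to-VM communication delay discussed at the start of these notes is typically injected with Linux `tc netem`; a minimal sketch, where `eth0` and the 25 ms value are placeholders (the real numbers should come from the latency papers mentioned above), and running the command requires root:

```python
import subprocess

def netem_cmd(iface: str, delay_ms: int, jitter_ms: int = 0) -> list:
    """Build the tc command that adds a fixed egress delay on `iface`."""
    cmd = ["tc", "qdisc", "add", "dev", iface, "root", "netem",
           "delay", f"{delay_ms}ms"]
    if jitter_ms:
        cmd.append(f"{jitter_ms}ms")  # optional jitter around the mean delay
    return cmd

def apply_delay(iface: str, delay_ms: int) -> None:
    # Requires root privileges and an existing network interface.
    subprocess.run(netem_cmd(iface, delay_ms), check=True)

# Illustrative value only; take real numbers from the latency papers above.
print(" ".join(netem_cmd("eth0", 25)))
```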
Meeting with Matthijs and Animesh.
- The benchmark can be deployed on multiple physical machines to emulate more virtual cloud/edge/endpoint nodes. Edge deployment and endpoint-only deployment work, and cloud deployment will work before the weekend.
- Then add metrics, so we can get an idea on how to write the experimental section in the paper
- e.g. Data exchanged and CPU usage between cloud/edge/endpoint
- Create a container per edge node that listens to all MQTT topics and all traffic -> calculate network usage.
- Only when all deployment modes and metrics work, move on to applications beyond ML and IIoT
- Add network delay between VMs and containers. This requires running the endpoint containers inside a VM.
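The per-edge-node traffic counter could be sketched as a subscriber on the MQTT wildcard topic `#` that sums payload bytes per topic; the paho-mqtt wiring in the comments is an untested assumption and the broker address is a placeholder:

```python
from collections import defaultdict

class MqttTrafficCounter:
    """Accumulates payload bytes per MQTT topic; feed it from on_message."""
    def __init__(self):
        self.bytes_per_topic = defaultdict(int)

    def record(self, topic: str, payload: bytes) -> None:
        self.bytes_per_topic[topic] += len(payload)

    def total_bytes(self) -> int:
        return sum(self.bytes_per_topic.values())

# Wiring with paho-mqtt (untested sketch; broker address is a placeholder):
#   import paho.mqtt.client as mqtt
#   counter = MqttTrafficCounter()
#   client = mqtt.Client()
#   client.on_message = lambda c, u, msg: counter.record(msg.topic, msg.payload)
#   client.connect("edge-broker.local")
#   client.subscribe("#")   # listen to all topics on this edge node
#   client.loop_forever()
```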
Meeting with Matthijs and Animesh in the morning, followed by a PhD group meeting including Alex.
- Update the ref-arch paper: Make a new overleaf and change to ICDCS format
- Ref-arch experiments:
- Different infrastructure: Deploy on cloud-only, edge-only, endpoint-only
- Different applications: ML, pathstore (or cstore), Clownfish (video analysis, project from Lin Wang), IIoT (Bjarne and Alessandro)
- Different metrics: Network latency / throughput, CPU and memory usage, total time etc. How do these help us to make better decisions.
- We only use KubeEdge, not other edge resource managers discussed in the paper. KubeEdge represents a large group of Kubernetes-based resource managers, and implementing more systems would take lots of time and not add much to this paper. However, it is future work for a benchmarking paper or similar
- Can we present the internal state of the benchmark implementation as some nice object? Coupling physical machines with VMs etc
- If we have time left: Add a multiplexer experiment, multiple different applications instead of the current homogeneous approach. Run more than 1 app per node as we have now.
- Experimental setup: Really try to run at least 1 experiment on Chameleon. That makes our setup legit, so reviewers won't question it. Other experiments can be run on our infrastructure, but at least the first should be run on Chameleon. Email Kate Katarzyna for access, and mention that I will be the technical contact point for our group
- Schedule:
- Add support for multiple physical machines
- Add cloud-only, edge-only and endpoint-only deployment models
- Add network delays between VMs (meeting with Animesh and Lin on this)
- Add more metrics and apps
- Future work
- Performance of Kubernetes scheduler: It is a 2 phase scheduler, first select candidate nodes using constraints specified by job, then select the best node using info on current usage. Does adding more constraints slow the scheduler down? Or does it become slow if you add no constraints and the pool of candidate nodes is very large
- Dynamic edge scheduler: Besides CPU and memory usage, also take network delays / throughput into account. Don't put all jobs which need to send lots of data from cloud <-> edge on the same edge node. And what about a constraint such as 'put me on an edge node 10ms from the cloud'? That's not possible at the moment.
- Possible Master thesis supervision with Jesse: Running serverless games from Jesse on my cloud/edge infrastructure. This requires the installation of OpenWhisk for example to run serverless jobs. Plan meeting with Jesse
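The two-phase scheduling described under future work can be sketched as filter-then-score; the node fields, constraints, and usage-based score below are simplified stand-ins, not the real Kubernetes scheduler plugins:

```python
def schedule(job, nodes):
    """Phase 1: filter nodes by the job's constraints.
       Phase 2: score the survivors by current usage and pick the best."""
    candidates = [n for n in nodes
                  if all(c(n) for c in job["constraints"])]   # filter phase
    if not candidates:
        return None
    # Lower CPU usage => better score (stand-in for real scoring plugins)
    return min(candidates, key=lambda n: n["cpu_usage"])

nodes = [
    {"name": "edge-0", "cpu_usage": 0.7, "mem_free_gb": 4},
    {"name": "edge-1", "cpu_usage": 0.2, "mem_free_gb": 1},
    {"name": "edge-2", "cpu_usage": 0.4, "mem_free_gb": 8},
]
job = {"constraints": [lambda n: n["mem_free_gb"] >= 2]}  # needs 2 GB free
print(schedule(job, nodes)["name"])  # edge-2: edge-1 fails the filter
```

Adding more constraints grows the work in phase 1; adding none grows the candidate pool for phase 2, which is the slowdown question raised above.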
Meeting with Matthijs and Animesh in the morning, followed by a meeting with Alex.
- Tuesday October 5: Give a small presentation (5-10 min) for the IntroCS course about what you're working on and how it impacts society to motivate students.
- Idea: Edge computing -> Low latency (see paper on this). Can be used for every application
- Story: Using web services -> running somewhere = cloud -> data centers are relatively far away -> what about 'smaller' clouds closer by
- Link to talks of previous year
- For the ref.arch, what about TinyML, TinyOS etc, are these covered by our design?
- Can we use The Web Conf (previously known as the WWW conf.) as an intermediate deadline before ICDCS? October 21 is the deadline, but the reply date is January 13, the same date as the ICDCS deadline, so probably not usable.
- Benchmark: Add more metrics, time to deploy edge container, communication delay etc. What is causing this delay, try to understand this in-depth. Possibly collaborate with Erwin who is working on understanding K8s performance.
Meeting with Matthijs and Animesh in the morning, followed by a SPEC edge meeting with Auday and Alessandro.
- Auday's paper: Add the ML use-case text before the next SPEC meeting (September 24)
- Also share the source for the reference architecture image via Google Drive & notify via mail
- At some point Matthijs needs to add the reference architecture section to the paper, but how should it be made different compared to the reference architecture paper itself?
- For the new ref.arch paper we replace the application-specific architecture section with experiments using the edge benchmark, but we need applications for the benchmark. Matthijs works on an ML application, Alessandro / Auday / Bjarne can provide an IIoT application
- Meeting at 17 September to further discuss this
- What applications can further be added? What is already out there? Maybe someone in the SPEC edge group has something
- Matthijs will show a demo of the edge benchmark in 2 weeks (September 24)
Matthijs, Animesh and Alex on what to do next after the reference architecture paper got rejected at SEC.
- The reviewers did like the paper, and thought it was a novel idea, however an implementation of some sorts is required according to them for SEC.
- Next target: ICDCS conference (deadline most likely January 2022), and if that fails TPDS journal.
- Most likely an 11-page limit including references.
- Implement the feedback from the SEC reviewers
- Shorten the references to 1 page
- Replace the application-specific architecture section with a benchmark part for which you then have ~3 pages
- Matthijs is currently working on a benchmark suite for cloud-edge-endpoint. This allows one to deploy applications on cloud/edge/endpoint and get both system (KubeEdge) and application (inside container) level metrics.
- Prepare an ML Terraform / KubeEdge benchmark for the paper, and build this system on top of the reference architecture (figure 2 in the paper). Map to all boxes of the architecture
- Be able to run cloud-only, edge-only, endpoint-only and combinations between those to relate to the reference architecture. For this we will use Tensorflow and Tensorflow-lite. The goal is to show what the trade-offs are, so show all possibilities and do not make a Tensorflow vs Lite argument ourselves.
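The deployment modes listed above (cloud-only, edge-only, endpoint-only, and combinations) could be enumerated programmatically so every run is driven from one list; `run_benchmark` in the comment is a hypothetical driver function:

```python
from itertools import combinations

TIERS = ["cloud", "edge", "endpoint"]

def deployment_modes():
    """All non-empty tier combinations: 3 single-tier + 3 pairs + 1 full."""
    modes = []
    for size in range(1, len(TIERS) + 1):
        for combo in combinations(TIERS, size):
            modes.append("-".join(combo))
    return modes

print(deployment_modes())
# One benchmark run per mode, e.g.:
# for mode in deployment_modes():
#     run_benchmark(mode)        # hypothetical driver function
```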
Matthijs, Animesh and Alex discuss the just submitted reference architecture paper (what went right / wrong), and what to do in the next couple of months.
- The NWO Klein grant proposal has 3 requirements for Matthijs: A reference architecture (which we already did), an offloading model (what are the dimensions for edge continuum offloading) and a RM portfolio scheduling system.
- Over the summer Matthijs will first work on a benchmark / experimental setup because this is needed to run experiments of future papers. This benchmark is part of the task description of an academic programmer (which we still need to hire), we should discuss with Henri / Lin how we should handle this now that Matthijs is already doing such a benchmark. This academic programmer can be used for other purposes.
- After the benchmark we will focus on the offloading model, and aim for a workshop paper at the end of this year / start of next year. We will make a case why a new RM&S scheduling solution is needed (because of the edge continuum), and show that we are working on this.
- Alex: Maybe we want to focus on serverless for the edge. The serverless model is underexplored for edge so this gives us novelty.
- Animesh: We want to incorporate ML techniques in the RM&S system as Matthijs has experience with ML.
- Fernando Kuipers runs the TU Delft Living Lab, they have infrastructure for 3/4/5G IoT / Edge. Mail him at some point, explain that we also do edge stuff and that Alexandru Iosup told us vaguely about what Fernando is doing. It would be nice to have access to other edge infrastructure outside of our own.
Meeting with Animesh to discuss the paper narrative
- Main contribution: We present a unified reference architecture for compute offloading in the edge continuum, showing how popular offloading paradigms such as edge and fog computing can be mapped onto a single model.
- Remove the resource management part. That's just a part of the paper, not the main thing.
- Section 2: Motivation (for a unified model). Merge the current system model chapter (workload, resources) with an explanation / comparison of offloading paradigms (big table, what are the differences / commonalities between the paradigms) and related work (the related reference architectures)
- Goal: Show how the offloading paradigms compare to each other. Given the commonalities between the paradigms we can create a single unified reference architecture.
- Section 3, reference architecture: This is what the unified model looks like. Explain how each offloading paradigm comes into play (missing at the moment)
- Section 4, mapping: Why do we compare resource managers, not a different part of the architecture -> talk about single-tenant / multi-tenant; we focus on single-tenant so we do not have to dive deep into that part for now. Instead we focus on RM, as edge computing is all about efficiently offloading workload onto constrained resources.
- More emphasis on how the systems map to the offloading paradigms. Possibly add columns to the table like 'common use case', and expand upon the 'main feature', making it more systematic.
- Section 5, application-specific: The goal is to show how an application using a specific offloading paradigm can map to our architecture.
- For industrial-IoT this is relatively simple, because the deployment / infrastructure is tightly coupled with the application.
- ML can be used in many different ways, so explain for each way which paradigm it fits, and how.
- Guidelines: We will do this at the end. Used to finish the story: We have presented a unified model and mapped systems and applications onto it -> what have we learned from it, how can this help others.
Weekly progress meeting with Animesh.
- Paper writing, I am currently working on the application specific architecture. 9/12 pages are filled.
- See Slack for the document
- Animesh will rewrite the introduction. Currently it's not very clear what I'm trying to sell in the paper. Think of a short sales pitch of the paper, what is the most important contribution.
- Maybe remove resource management from the title of the paper? It's not only about resource management, but more about the edge continuum (that should be the selling point compared to previous work).
- Define clearly what offloading is, and where you can offload to.
- Give more examples in the edge reference architecture. Show how components are different between endpoints and edge nodes.
- Networking section: Spend only a single paragraph on table 1, spend the rest of the page on table 2. And leave out the high level introduction (what application level networking is, what TCP is etc.)
- Also look into application-level services like nats.io (a Kafka-like framework for the edge)
- Change the mapping section into three subsections: One for public cloud, one for open-source, one for academia. First explain their common components, then only the differences. Currently you're repeating too much.
- Remove details (e.g. 'MQTT is used for communication'), add how the systems are mapped (see table), that's currently ignored in the text.
- Rename the mapping section's 'overview' to something like 'takeaway lessons'. 'Overview' looks like it can safely be ignored.
- Definition of resource management: Can be used by multiple tenants (like a driver). Something inside an individual application, like an efficient algorithm which uses the driver API, isn't part of resource management.
- Email Alessandro tomorrow with a draft which includes the ML application specific architecture. Ask if he can help write a similar section but for industrial applications.
- For next week's meeting: Have a concept version of the paper ready, all pages filled out, no summations anymore. Then we have 2 weeks left to streamline and improve the paper.
Weekly progress meeting with Animesh and Alex.
- Paper writing, up to the application specific architecture section will be finished tomorrow (first 7 pages)
See todo comments in the pdf for all details: Paper Repository
Change the title to something like "A Reference Architecture for the Edge Continuum and its Implications (for ...)"
- Do we still want to use the term edge continuum? Maybe compute continuum? Or no continuum at all?
Introduction
- Clouds are not that far away (I hosted a training session on this last week). "The latency to the edge is much lower than to the cloud" is not always true anymore.
Reference architecture
- Endpoints do data sensing. Edge nodes do data aggregation. Clouds do data processing. But this depends on the paradigm:
- Endpoints can do data processing with mobile crowdsourcing
- Clouds can do data aggregation with mobile cloud computing.
- Add a new 3.1 overview. Present an overview of the architecture
- Argue why we go in depth in the following sections
- Split sec3.3 cloud up into multiple sections
- Figure 2: Add subsection numbers so it's easy to see how the figure and text are linked
- Call sec3.4 "Networking and storage" or similar. Talk about storage, even if it's super short. Refer to some papers. If there only are application specific solutions, talk about it in the application specific architectures section
- How to extend the text: Talk about more application level choices (synchronization, group communication etc). Maybe ask Lin.
- Figure 1 / 2: Label network arrows, refer to it in the text.
- For networking, add Terahertz (THz), VLC, 6G(?), radio / microwave, QUIC. These are the SotA.
- Table 1: Add typical latency
- Table 2: Why these protocols. Query them, do they have many references? Otherwise ask Jan Rellermeyer
- Explain what QoS is, refer to industry standards.
Application specific
- Go one level deeper than the general architecture / mapping. Think about the components inside the boxes.
Guidelines
- Change the name to challenges to align with the new title
- Ask Alessandro Papadopoulos in ~10 days to have a look at a draft of the paper, especially the industrial IoT specific architecture.
Weekly progress meeting with Animesh. We skipped the last one because of Ascension Day.
- Paper writing, reference architecture section is almost done, started working on mapping
- KubeEdge: Same as last time, didn't have much time to work on it.
Paper
- Reference architecture
- Currently there are too many boring facts. We need more "why" instead of "how".
- For each subsection, add a part on "implications and open challenges". What should edge nodes (example) do, how is it currently done (see reference architecture), what are the implications of the current solutions and what are the open challenges.
- Why is it a challenge, why is it needed. What do we still need to solve.
- Make it half survey, half vision / implication
- Example paper for this approach: granular computing paper from HotOS, stanford, 2 years ago
- What would be ideal (e.g. all user code gets automatically deployed without a user needing to do anything), what needs to be done to reach that
- Reference architecture networking
- What are the options, what are the differences (with figure). Which are used / how popular
- Mapping
- Ignore application / engines in the mapping, see the application specific architecture section
- In the table, add typical application domain (ML, video processing)
- Remove the heatmap as all systems map to the same small subset of components
- App specific architectures
- Pick a use case: ML at the edge. What is the state-of-the-art, what is needed
- Example: splitting model between endpoint / edge and cloud
- Second use case: industrial iot related
- Guidelines / open challenges: what is the ideal situation, what are the challenges we need to face.
- Automate the deployment as much as possible using the Docker API and Kubernetes API (https://kubernetes.io/docs/reference/using-api/client-libraries/)
- Use a local timing server to which all nodes can connect and send (timestamp, device, event)
- Finish up to page 7 (reference architecture + mapping section). For all other pages / sections / subsections, add short descriptions on what the content should be like.
- KubeEdge (~ 2 weeks): Have a simple benchmark ready. Given 1 cloud node, 1 edge node and 1 endpoint running 1 application, give a timeline breakdown (deploy, communication, computation latency).
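The local timing server mentioned above could work like this: nodes send `device,event` lines over UDP and the server attaches an arrival timestamp. The port and line format are assumptions; a minimal sketch:

```python
import socket
import time

def record_event(line: str, now: float) -> dict:
    """Parse a 'device,event' line and attach the server-side timestamp."""
    device, event = line.strip().split(",", 1)
    return {"timestamp": now, "device": device, "event": event}

def serve(host="0.0.0.0", port=5555):
    # Single-threaded UDP loop; fine for low-rate benchmark events.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(1024)
        print(record_event(data.decode(), time.time()))

# To run the server: serve()   (blocks forever, listening for events)
```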
Weekly progress meeting with Animesh. We skipped the last one because of the many holidays we had.
- Reference architecture paper: I have processed the feedback from Animesh on the paper. The introduction is better, the system model is shorter. I am working on the reference architecture chapter still.
- KubeEdge: KubeEdge works, and I am working on an example application. An endpoint sends frames over MQTT to an edge node, which receives frames one by one, and uses TFLite ML image classification. The produced result is currently ignored, in the future something should be done with it
- The reference architecture page 2 visualization looks better than before
- Mapping section: Map general resource managers / edge systems, see [1] [2]. Pick ~ 10-12 systems from the reading list. On page 6 a table like [1][2], on page 7 a heatmap like fig 5 [2], and do the general explanation like [1].
- Application specific architectures: For now focus on 1, the machine learning app I am working on. Explain what the challenges are for machine learning inference at the edge, then map KubeEdge + application to the reference architecture pipeline (the heatmap version without the heatmap, not the page 2 figure). Then do some basic experiments with 2 different datasets, show latency / throughput.
- Do we want to build on top of KubeEdge or start something ourselves?
- Either way, we use it as baseline for what we build later
- Goal: Build something like KubeEdge, but you can also use the cloud to run stuff (which you can't with KubeEdge). For the cloud, use Kubernetes as we can argue that that is the best system, and us making something new doesn't make sense. For the edge, present the entire edge (all nodes) as a single big resource to Kubernetes, so when we deploy something on Kubernetes for the edge it goes to that single device. That device then contains our code (resource management for edge, scheduling etc)
- Application: Something that uses both endpoints (which we don't manage, they only offload), edge and cloud. Example: Endpoints send frames from a video to the edge, the edge nodes do ML inference, send the result to the cloud which then makes a decision (and sends that decision to the edge?)
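The 'present the entire edge as one big resource to Kubernetes' idea above amounts to advertising the summed capacity of all edge nodes as a single virtual node, while the actual placement inside the edge stays with our own code. A sketch; the dict fields are illustrative, not real Kubernetes node objects:

```python
def virtual_edge_node(edge_nodes):
    """Aggregate per-node capacities into one virtual node that Kubernetes
       sees; placement among the members stays our responsibility."""
    return {
        "name": "virtual-edge",
        "cpu_cores": sum(n["cpu_cores"] for n in edge_nodes),
        "mem_gb": sum(n["mem_gb"] for n in edge_nodes),
        "members": [n["name"] for n in edge_nodes],
    }

edge = [
    {"name": "edge-0", "cpu_cores": 4, "mem_gb": 8},
    {"name": "edge-1", "cpu_cores": 2, "mem_gb": 4},
]
print(virtual_edge_node(edge))  # one node with 6 cores and 12 GB
```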
For next week's meeting
- Finish the reference architecture section
- Finish (partially) the mapping section
- Make the ML application work on KubeEdge
For a later week
- Benchmark Kubernetes. This makes you understand how Kubernetes works, and what container throughput its control plane can support. The control plane only, because we are not interested in benchmarking the underlying hardware. So we probably need some abstractions for the pods, otherwise we can't run enough workload to overload Kubernetes' control plane. Software we can potentially use: KubeBench and [1] [2]
Weekly progress meeting with Animesh
- Reference architecture paper: I have 5 / 12 pages now, these are the introduction, the system model and part of the reference architecture sections. Progress is going well.
- KubeEdge: There was a new bug suddenly, but the devs have fixed it thanks to a GitHub issue. A 'hello world' example still isn't working, but I am getting closer by the day.
- I have read papers from and attended presentations at ICPE. I have also read quickly through the EuroSys program.
- Some work on the ATDS 2022 course needs to be done (we talked about this last Monday). Find 6 good artifacts; the workload has been split between me and Giulia
- Reference architecture paper
- Reference architecture section: Keep the cloud part short, that isn't interesting / new. We focus on edge and endpoints. Add some good references instead. This section is getting a bit too big according to the page budget, but ignore that for now.
- Evaluation: maybe include some KubeEdge things. Explain the architecture, not results (that is for a next paper). Talk a bit more in-depth about this example, and do 1/2 other examples more high level.
- KubeEdge microbenchmarks
- Create a microbenchmark for KubeEdge as baseline for its performance
- Add a script to the cloud VM, ask KubeEdge to deploy X (1/10/100/1000 etc) containers at some edge node. The containers are small dummies.
- How to get time data: Create a separate timing server. The microbenchmark script sends a message to this server when it asks KubeEdge to deploy containers. Each container at startup sends a message to the server, then maybe sleeps for some time and stops. You can use this timing data to reason about how long it takes KubeEdge to startup containers (and how it handles overloading).
- You can add this partially to the paper. This is the KubeEdge architecture, we have actually deployed it and benchmarked it for verification, this is the code for that.
- After that you can add a TC script to the container to emulate wifi / 5g network traffic between containers (and packet drop)
- More writing as usual
- Get the 'hello world' example working on KubeEdge as soon as possible
Weekly progress meeting with Animesh
- Progress on KubeEdge has been made: It is deployed locally on some KVMs, and a manual has been created on how to use and install it: https://github.com/EdgeVU/group-notes/wiki/KubeEdge. The 'hello world' application does not work yet, more debugging is needed.
- Reference architecture paper: System model section is finished, starting with the reference architecture section. Animesh will do a quick read through the introduction and system model sections to see if the general tone is ok.
Weekly progress meeting with Animesh
- Talk about the reference architecture paper and KubeEdge
Meeting with Animesh and Alex about the Edge Resource Management Reference Architecture paper
- Progress on the reference architecture paper. Reading list is finished, added page 2 figure to the paper and started writing more.
- Ask Giulia for the reviews she got for her last paper submission, use that feedback
- Add more figures / tables to the paper: You want to avoid walls of text
- In the reference architecture section add a new figure which describes data / program flow between all components of the edge continuum
- Hint to next and future work in the paper (that you will be doing). For example in the application specific architectures or in the guidelines (suggest possible experiments). But don't do too much, otherwise the future work isn't needed anymore
- Guidelines section: what type of research is overexplored, what is underexplored. Create a heatmap similar to https://arxiv.org/pdf/1808.04224.pdf figure 5
- Do related work at the end of the paper with a table ('previous work did ..., we did ...')
- More writing :D
Weekly progress meeting with Animesh
- Progress on reference architecture paper -> Reading list is almost finished, page 2 figure has been updated.
General comments
- Introduction: Write the last paragraph of the introduction, setting the tone of the paper. Edge computing is here, but things are confusing, so we give a systematic overview of the field with a reference architecture.
- Contributions
- General reference architecture for edge computing
- We provide application-specific reference architectures for ...
- We propose some guidelines for edge apps / systems based on our reference architectures. Maybe present a decision-making flow chart: what kind of architecture does a developer need? Can they just use cloud computing?
- Resulting structure of the paper: Introduction, background, general reference architecture, application specific reference architectures, guidelines
- Make a document with everything you have done / are doing related to your PhD for the go / no-go decision at the end of year 1.
Add these papers to the reading list
- Towards Efficient Edge Cloud Augmentation for Virtual Reality MMOGs., SEC 2017
- SEC 2019 CSPOT: Portable, Multi-scale Functions-as-a-Service for IoT, https://sites.cs.ucsb.edu/~rich/publications/cspot-sec19.pdf
BSc theses
- Weekly group meeting with all BSc students on Thursday. Join the first meeting, see if it helps that I'm there, decide afterwards what to do.
- I will be involved with Richard's thesis.
- General plan: Build simple stuff -> test simple stuff -> only then add fancy stuff.
KubeEdge
- Do we want to use KubeEdge and build on top of it, or build our own stuff -> use KubeEdge for now, see how difficult it is, and how impactful extending KubeEdge would be. Examples: Model resources better, do more research into heterogeneity.
- How to benchmark KubeEdge
- Compare to other systems: Nuclio, OpenFaaS (other Kubernetes related systems). Also choose something non-kubernetes related (edgeXfoundry, academic systems).
- Pick a few benchmark applications, then some interesting metrics (basic performance, memory overhead, speed of instantiation, scalability of maintaining meta data, data structures, decision making time)
- Contribution: Systematically compare edge benchmark frameworks (mini-survey, theoretical exercise). Explain why they are suitable / unsuitable (in-depth, timeline etc for KubeEdge), identify gaps. Third: the benchmark / experiments
- Potential submit to eurosys, icdcs, nsdi. Should be heavy into coding around June.
- When KubeEdge is up and running, make a baseline, write the benchmarks. Automation scripts in Bash / Python, benchmarks themselves in a lower level language like C or Go. What are the API options for KubeEdge (probably mostly REST interfaces, so any language will work).
- Potential MSc thesis: Add webassembly support to KubeEdge (integrate sledge into KubeEdge). Add this to the large thesis document if you think it is feasible. The student should be comfortable with GoLang.
- Redo the introduction. Leave the background section for now, that can be filled in later.
- Consider the reading list as finished for now, stop aimlessly adding papers.
- Add the page 2 figure to the paper, start mapping the papers from the reading list to the figure. Introduction / background / general reference architecture should be the first 4-5 pages of the paper.
- Gradually start working on KubeEdge. Understand what it does, compile it, deploy it, fork to our Git repository, make a hello world example -> 1 / 2 weeks
- Try to make a timeline of when what happens in KubeEdge, and benchmark each part -> in 2 / 4 weeks
Meeting with Animesh and Alex about the Edge Resource Management Reference Architecture paper, specifically about the reference architecture figure Matthijs prepared.
This reference architecture is distinct from others because:
- It should be a superset of all edge-related systems / already published reference architectures. Examples of these are:
- Mobile cloud: only mobile nodes, in a p2p network, which can offload to each other
- Mobile edge computing: mobile + edge nodes, focuses mostly on network side
- Mobile edge cloud: the edge as an extension of the cloud, edge nodes can be data centers
- Call the collection of mobile-edge-cloud either the cloud continuum or edge continuum. These other compound terms with mobile, edge and cloud are too confusing
- It should be application focused. Pick apps from a few domains (3-5: ML, gaming, video streaming, general sensing), and these should all map onto (the boxes in) the reference architecture. The architecture should have all relevant components for these applications. Think about what a TensorFlow app needs to be offloaded to the edge and run there for example
- Come up with a publication plan
- Write a good paper, push into a conference. Aim for Symposium on Edge Computing (SEC), abstract deadline 17 June, paper deadline 24 June. http://acm-ieee-sec.org/2021/call%20for%20papers.php
- Afterwards, you can adapt / summarize the paper and push for CACM or some magazine
- Finally, try to publish a technical report with the SPEC group, so the reference architecture is SPEC endorsed.
- Update the figure taking today's meeting into account (the details haven't been mentioned here)
- Try to finish the reading list soon
- Have a meeting with Animesh regularly, and with Alex in ~2 weeks
Meeting with Animesh on the reference architecture paper
- Made a first version of the introduction and background for the edge reference architecture paper
- Did background research, mostly about low level hypervisor / container / OS
- Remove 'serverless' from the title. Serverless is just one of the options in the ref arch.
- Introduction
- Par 1: Introduce edge computing. Don't use more text (which is the case at the moment). People already know what edge computing is.
- Par 2: What has changed "however, serving so many users in the edge is challenging because the edge has constrained resources compared to the cloud"
- Par 3: What is the problem caused by the shift. Possibly a summation of problems.
- Par 4 / 5: This paper
- Page 2 diagram
- Based on: The SPEC-RG Reference Architecture for FaaS
- Bottom layer: Resource (hardware) layer. Consists of vm / container / language vm / bare metal(?)
- Middle layer: Application management. Storing apps, deploying them etc (like docker)
- Top layer: Workflow / Application specific workloads (ML, gaming). Also includes systems that consider sensor-edge instead of only edge, or connections between cloudlets, or edge-cloud.
- Plan a meeting with Animesh and Alex for next week
- (Monday March 15th 9h-12h or Tuesday march 16th all day except 12h-14h?)
- Make a page 2 figure of the reference architecture
- Make a list of papers to map to the reference architecture (should cover all aspects of the ref arch)
- Add the list of papers to the EdgeVU GitHub wiki
- Think about how to do an evaluation of the ref arch. How to validate it?
- Link the Overleaf Latex project to this GitHub
https://docs.google.com/document/d/1XFJ1j1r7tpaFbAL3IKtIRcniwxL39bURl1GjrOWJDEE