Commit
Merge pull request #14 from stephhazlitt/arrow
Add arrow workshop + images
mine-cetinkaya-rundel authored Feb 6, 2024
2 parents 7849af4 + 4e14274 commit e0f8619
Showing 3 changed files with 20 additions and 16 deletions.
36 changes: 20 additions & 16 deletions workshops/arrow.qmd
@@ -1,35 +1,39 @@
 ---
-title: Big Data in R with Arrow (Details TBD)
+title: Big Data in R with Arrow
 author:
-  - name: Instructor 1 name
-    affiliations:
-      - name: Instructor 1 affiliation
-  - name: Instructor 2 name (remove if single instructor)
-    affiliations:
-      - name: Instructor 2 affiliation
+  - name: Nic Crane
+    # affiliations:
+    #   - name:
+  - name: Steph Hazlitt
+    # affiliations:
+    #   - name:
 description: |
-  1-sentence summary of workshop.
-categories: [add, comma, separated, categories]
+  An introduction to Apache Arrow for creating efficient analysis pipelines with larger-than-memory data in R.
+categories: [R, Arrow, Data Engineering]
 ---
 
 # Description
 
-Full workshop description goes here. Multi-paragraph ok.
+Data analysis pipelines with larger-than-memory data are becoming more and more commonplace. In this workshop you will be introduced to Apache Arrow, a multi-language toolbox for working with larger-than-memory tabular data, to create seamless "big" data analysis pipelines with R.
+
+This workshop will focus on using the arrow R package---a mature R interface to Apache Arrow---to process larger-than-memory files and multi-file datasets using familiar dplyr syntax. You'll learn to create and use the interoperable Parquet file format for efficient data storage and access, with data stored both on disk and in the cloud, and also how to exercise fine control over data types to avoid common large-data pipeline problems. Designed for new-to-arrow R users, this workshop will provide a foundation for using Arrow, giving you access to a powerful suite of tools for performant analysis of larger-than-memory tabular data in R.
 
 # Audience
 
 This course is for you if you:
 
-- list at least
+- want to learn how to work with tabular data that is too large to fit in memory using existing R and tidyverse syntax implemented in Arrow
 
-- three attributes
+- want to learn about Parquet, a powerful file format alternative to CSV files
 
-- for your target audience
+- want to learn how to engineer your tabular data storage for more performant access and analysis with Apache Arrow
 
 # Instructor(s)
 
-|                  |                  |                                     |
-|------------------|------------------|-------------------------------------|
-| ![](images/name-lastname.jpg) | | Instructor bio, including link to homepage. |
+|         |         |                                                     |
+|---------|---------|-----------------------------------------------------|
+| ![](images/nic_crane.png) | | [Nic Crane](https://niccrane.com) is an R consultant with a background in data science and software engineering. They are passionate about open source, and about learning and teaching all things R. Nic is part of the core team that maintains the Arrow R package and a co-author of "Scaling up with R and Arrow", due to be published by CRC Press later this year. |
+| ![](images/steph-hazlitt.jpg) | | Steph Hazlitt is a data scientist, researcher and R enthusiast. She has spent the better part of her career wrangling data with R and supporting people and teams in creating and sharing data science-related products and open source software. Steph is the Director of Data Science Partnerships with BC Stats. |
 
 : {tbl-colwidths="\[25,5,70\]"}
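The workflow the added description promises (dplyr verbs evaluated lazily against larger-than-memory Parquet data via arrow) can be sketched in a few lines of R. This is an illustrative sketch only, not workshop material: the dataset path and column names are hypothetical, and the arrow and dplyr packages must be installed.

```r
library(arrow)
library(dplyr)

# Hypothetical example: a directory of Parquet files too large to load into RAM.
# open_dataset() reads only metadata; no data is pulled into memory yet.
checkouts <- open_dataset("data/library-checkouts/")

# Familiar dplyr verbs build a lazy query against the dataset;
# collect() executes it, bringing only the (small) result into R.
by_year <- checkouts |>
  filter(checkout_year >= 2020) |>
  group_by(checkout_year) |>
  summarise(total = sum(n_checkouts)) |>
  collect()

# Re-engineering storage: write the data back out as Parquet,
# partitioned by year, for faster downstream access.
checkouts |>
  group_by(checkout_year) |>
  write_dataset("data/checkouts-by-year/", format = "parquet")
```

The key design point the workshop teaches is the lazy-evaluation split: everything before `collect()` is compiled to an Arrow query plan and executed outside R, so only aggregated results ever occupy R's memory.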
Binary file added workshops/images/nic_crane.png
Binary file added workshops/images/steph-hazlitt.jpg

0 comments on commit e0f8619