ariExtra immigration #46

Merged: 51 commits, merged Oct 17, 2023
Commits (51)
70adb10  Resolve (Jul 7, 2023)
d80cb3b  Add `download_gs_file()` and related functions from ariExtra (Jun 2, 2023)
a678cc2  Update documentation (Jun 2, 2023)
9533715  Add `pdf_to_pngs()` from ariExtra (Jun 2, 2023)
6524a07  Stylistic changes (Jun 2, 2023)
7cbd032  Resolve (Jul 7, 2023)
8226531  Resolve (Jul 7, 2023)
4b45b2f  Resolve (Jul 7, 2023)
25f8bd1  Set defaults for 'model_name' and 'vocoder_name' (Jun 12, 2023)
1ac3054  Use `cli_alert_warning()` (Jun 16, 2023)
0cdff11  Documentation (Jun 16, 2023)
0532e89  Comment code (Jul 10, 2023)
86ff441  Updates (Jul 10, 2023)
cbe2af8  Merge pull request #48 from jhudsl/main (seankross, Jul 10, 2023)
2131ede  Document model_name and vocoder_name argument in `ari_spin()` (Jul 10, 2023)
22b7f13  Add `check_png_urls()` (Jul 10, 2023)
98b30bd  `pad_wav()` Documentation (Jul 10, 2023)
aa693cf  Use \dontrun{} around `pad_wav()` example (Jul 10, 2023)
e8e495f  Documentation for `ari_spin()` (Jul 11, 2023)
66f9a64  Add `pptx_to_pdf()` (Aug 15, 2023)
6864eec  Add `sys_type()` and `os_type()` (Aug 28, 2023)
86fd00f  Don't need `fix_soffice_library_path()` (Aug 29, 2023)
60e6f25  get rid of text2speech specific code (Aug 31, 2023)
96c5765  Fix `ari_narrate()` so we can get rid of text2speech (Aug 31, 2023)
8ff859e  Put all the ffmpeg related arguments into a list called `ffmpeg_args` (Sep 1, 2023)
63eaaae  Syntax fix (Sep 1, 2023)
48d3cdf  `ari_burn_subtitles()`: Fix destination of output in system command (Sep 1, 2023)
0c34a4e  Document `ari_subtitles()` and `ari_burn_subtitles()` (Sep 5, 2023)
c820e21  Fix dependency issue (Sep 8, 2023)
e259d89  Remove download_gs_file.R and pptx_notes.R (howardbaik, Sep 14, 2023)
9b595ef  Fix merge conflicts when git pull-ing from ariExtra-immigration branch (howardbaik, Sep 14, 2023)
1923a9b  Merge branch 'ariExtra-immigration' of https://github.com/jhudsl/ari … (howardbaik, Sep 14, 2023)
45119d5  Run `document()` (howardbaik, Sep 14, 2023)
0183e3f  Get rid of unnecessary Imports in DESCRIPTION (howardbaik, Oct 9, 2023)
be9b52c  Put progress bar back into `ari_spin()` (howardbaik, Oct 9, 2023)
35c1a60  Get rid of `print()` (howardbaik, Oct 10, 2023)
e6711ba  progress_bar (howardbaik, Oct 10, 2023)
74ad489  Create `coqui_args()` (howardbaik, Oct 10, 2023)
bbb8f02  Replace `tts_engine_args` with `coqui_args()` (howardbaik, Oct 10, 2023)
674c9cf  Made final changes to code (howardbaik, Oct 16, 2023)
21be0ae  Passed R CMD CHECK (howardbaik, Oct 16, 2023)
12f517a  Merge branch 'ariExtra-immigration' into burn-subtitles (howardbaik, Oct 16, 2023)
b745ff3  Get rid of default argument for `output_video` (howardbaik, Oct 16, 2023)
cbbeff2  Create `set_ffmpeg_args()` (howardbaik, Oct 16, 2023)
1b0b1eb  Build ffmpeg command to supply to `system()` (howardbaik, Oct 16, 2023)
6366f44  Resolve merge conflicts with `ariExtra-immigration` branch (howardbaik, Oct 16, 2023)
42ba661  Documentation stuff (howardbaik, Oct 16, 2023)
e14f959  More Documentation stuff (howardbaik, Oct 16, 2023)
00658f1  Merge pull request #53 from jhudsl/reduce-arguments (howardbaik, Oct 16, 2023)
b647663  Merge pull request #52 from jhudsl/burn-subtitles (howardbaik, Oct 16, 2023)
d286fde  Fix documentation (howardbaik, Oct 17, 2023)
NAMESPACE (4 changes: 3 additions & 1 deletion)
@@ -1,12 +1,15 @@
# Generated by roxygen2: do not edit by hand

export(ari_burn_subtitles)
export(ari_example)
export(ari_narrate)
export(ari_spin)
export(ari_stitch)
export(ari_subtitles)
export(ari_talk)
export(audio_codec_encode)
export(check_ffmpeg_version)
export(coqui_args)
export(ffmpeg_audio_codecs)
export(ffmpeg_codecs)
export(ffmpeg_convert)
@@ -26,7 +29,6 @@ export(set_video_codec)
export(video_codec_encode)
importFrom(cli,cli_alert_info)
importFrom(hms,hms)
importFrom(progress,progress_bar)
importFrom(purrr,compose)
importFrom(purrr,discard)
importFrom(purrr,map)
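The NAMESPACE diff above newly exports `ari_burn_subtitles()` and `coqui_args()` and drops the `progress` import. As a quick orientation, here is a minimal sketch of how the newly exported `coqui_args()` is used, based only on the diffs in this PR (it is the default value of `tts_engine_args` in `ari_narrate()` below); the commit history suggests it sets 'model_name' and 'vocoder_name' defaults internally, but its exact return value is not shown in this diff.

# Sketch only: coqui_args() is called with its defaults, exactly as in the
# ari_narrate() signature further down; inspecting the result is illustrative.
library(ari)

tts_args <- coqui_args()  # default Coqui TTS engine arguments
str(tts_args)             # peek at the argument list before passing it on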
R/ari_burn_subtitles.R (13 changes: 7 additions & 6 deletions)
@@ -4,20 +4,21 @@
#' \code{--enable-libass} as per
#' \url{https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo}
#'
#' @param video Video in \code{mp4} format
#' @param srt Subtitle file in \code{srt} format
#' @param input_video Path to video in \code{mp4} format
#' @param srt Path to subtitle file in \code{srt} format
#' @param output_video Path to video with subtitles
#' @param verbose print diagnostic messages. If > 1,
#' then more are printed
#'
#' @return Name of output video
ari_burn_subtitles <- function(video, srt, verbose = FALSE) {
#' @export
ari_burn_subtitles <- function(input_video, srt, output_video, verbose = FALSE) {
ffmpeg <- ffmpeg_exec(quote = TRUE)
if (verbose > 0) {
message("Burning in Subtitles")
}
command <- paste(
ffmpeg, "-y -i", video, paste0("-vf subtitles=", srt),
video
ffmpeg, "-y -i", input_video, paste0("-vf subtitles=", srt), output_video
)

if (verbose > 0) {
@@ -28,5 +29,5 @@ ari_burn_subtitles <- function(video, srt, verbose = FALSE) {
warning("Result was non-zero for ffmpeg")
}

return(video)
output_video
}
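For reference, a short usage sketch of the new `ari_burn_subtitles()` signature introduced above; the file paths are placeholders, and ffmpeg must be built with --enable-libass for subtitle burning to work.

library(ari)

# Burn lecture.srt into lecture.mp4 and write the result to a new file
# (placeholder paths; output_video no longer has a default).
ari_burn_subtitles(
  input_video  = "lecture.mp4",
  srt          = "lecture.srt",
  output_video = "lecture_subtitled.mp4",
  verbose      = TRUE  # print diagnostic messages while ffmpeg runs
)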
R/ari_narrate.R (84 changes: 38 additions & 46 deletions)
@@ -1,9 +1,7 @@
#' Create a video from slides and a script
#' Generate video from slides and a script
#'
#' \code{ari_narrate} creates a video from a script written in markdown and HTML
#' slides created with \code{\link[rmarkdown]{rmarkdown}} or a similar package.
#' This function uses \href{https://aws.amazon.com/polly/}{Amazon Polly}
#' via \code{\link{ari_spin}}.
#'
#' @param script Either a markdown file where every paragraph will be read over
#' a corresponding slide, or an \code{.Rmd} file where each HTML comment will
@@ -12,12 +10,9 @@
#' \code{\link[rmarkdown]{rmarkdown}}, \code{xaringan}, or a
#' similar package.
#' @param output The path to the video file which will be created.
#' @param voice The voice you want to use. See
#' \code{\link[text2speech]{tts_voices}} for more information
#' about what voices are available.
#' @param service speech synthesis service to use,
#' passed to \code{\link[text2speech]{tts}}.
#' Either \code{"amazon"} or \code{"google"}.
#' @param tts_engine The desired engine for converting text-to-speech
#' @param tts_engine_args List of parameters provided to the designated text-to-speech engine
#' @param tts_engine_auth Authentication required for the designated text-to-speech engine
#' @param capture_method Either \code{"vectorized"} or \code{"iterative"}.
#' The vectorized mode is faster though it can cause screens to repeat. If
#' making a video from an \code{\link[rmarkdown]{ioslides_presentation}}
@@ -26,13 +21,9 @@
#' default value is \code{FALSE}. If \code{TRUE} then a file with the same name
#' as the \code{output} argument will be created, but with the file extension
#' \code{.srt}.
#' @param ... Arguments that will be passed to \code{\link[webshot]{webshot}}.
#' @param verbose print diagnostic messages. If > 1, then more are printed
#' @param audio_codec The audio encoder for the splicing. If this
#' fails, try \code{copy}.
#' @param video_codec The video encoder for the splicing. If this
#' fails, see \code{ffmpeg -codecs}
#' @param cleanup If \code{TRUE}, interim files are deleted
#' @param ... Arguments that will be passed to \code{\link[webshot]{webshot}}.
#'
#' @return The output from \code{\link{ari_spin}}
#' @importFrom xml2 read_html
@@ -44,46 +35,43 @@
#' @export
#' @examples
#' \dontrun{
#'
#' #
#' ari_narrate(system.file("test", "ari_intro_script.md", package = "ari"),
#' system.file("test", "ari_intro.html", package = "ari"),
#' voice = "Joey"
#' )
#' system.file("test", "ari_intro.html", package = "ari"),
#' output = "test.mp4")
#' }
ari_narrate <- function(script, slides,
output = tempfile(fileext = ".mp4"),
voice = text2speech::tts_default_voice(service = service),
service = "amazon",
ari_narrate <- function(script, slides, output,
tts_engine = text2speech::tts,
tts_engine_args = coqui_args(),
tts_engine_auth = text2speech::tts_auth,
capture_method = c("vectorized", "iterative"),
subtitles = FALSE, ...,
subtitles = FALSE,
verbose = FALSE,
audio_codec = get_audio_codec(),
video_codec = get_video_codec(),
cleanup = TRUE) {
auth <- text2speech::tts_auth(service = service)
cleanup = TRUE,
...) {
# Authentication for Text-to-Speech Engines
auth <- tts_engine_auth(service = tts_engine_args$service)
# Stop message
if (!auth) {
stop(paste0(
"It appears you're not authenticated with ",
service, ". Make sure you've ",
tts_engine_args$service, ". Make sure you've ",
"set the appropriate environmental variables ",
"before you proceed."
))
}


# Check capture_method
capture_method <- match.arg(capture_method)
if (!(capture_method %in% c("vectorized", "iterative"))) {
stop('capture_method must be either "vectorized" or "iterative"')
}

# Output directory, path to script
output_dir <- normalizePath(dirname(output))
script <- normalizePath(script)
if (file_ext(script) %in% c("Rmd", "rmd") & missing(slides)) {
tfile <- tempfile(fileext = ".html")
slides <- rmarkdown::render(input = script, output_file = tfile)
}

# Slides
if (file.exists(slides)) {
slides <- normalizePath(slides)
if (.Platform$OS.type == "windows") {
@@ -92,52 +80,56 @@ ari_narrate <- function(script, slides,
slides <- paste0("file://localhost", slides)
}
}
# Check if script and output_dir exists
stopifnot(
file.exists(script),
dir.exists(output_dir)
)

# Convert script to html and get text
if (file_ext(script) %in% c("Rmd", "rmd")) {
paragraphs <- parse_html_comments(script)
} else {
html_path <- file.path(output_dir, paste0("ari_script_", grs(), ".html"))
html_path <- file.path(output_dir, paste0("ari_script_", get_random_string(), ".html"))
if (cleanup) {
on.exit(unlink(html_path, force = TRUE), add = TRUE)
}
render(script, output_format = html_document(), output_file = html_path)
rmarkdown::render(script, output_format = rmarkdown::html_document(), output_file = html_path)
paragraphs <- map_chr(
html_text(html_nodes(read_html(html_path), "p")),
rvest::html_text(rvest::html_nodes(xml2::read_html(html_path), "p")),
function(x) {
gsub("\u2019", "'", x)
}
)
}

# Path to images
slide_nums <- seq_along(paragraphs)
img_paths <- file.path(
output_dir,
paste0(
"ari_img_",
slide_nums, "_",
grs(), ".jpeg"
get_random_string(), ".jpeg"
)
)

# Take screenshot
if (capture_method == "vectorized") {
webshot(url = paste0(slides, "#", slide_nums), file = img_paths, ...)
webshot::webshot(url = paste0(slides, "#", slide_nums), file = img_paths, ...)
} else {
for (i in slide_nums) {
webshot(url = paste0(slides, "#", i), file = img_paths[i], ...)
webshot::webshot(url = paste0(slides, "#", i), file = img_paths[i], ...)
}
}

if (cleanup) {
on.exit(walk(img_paths, unlink, force = TRUE), add = TRUE)
}

# Pass along ari_spin()
ari_spin(
images = img_paths, paragraphs = paragraphs,
output = output, voice = voice,
service = service, subtitles = subtitles,
verbose = verbose, cleanup = cleanup
)
output = output,
tts_engine = tts_engine,
tts_engine_args = tts_engine_args,
tts_engine_auth = tts_engine_auth,
subtitles = subtitles)
}
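And a sketch of calling the refactored `ari_narrate()`, mirroring the roxygen example in the diff above; note that `output` no longer has a default, so it must be supplied, and the text-to-speech arguments default to `text2speech::tts` with `coqui_args()`.

library(ari)

# Narrate the bundled demo script over its HTML slides (paths come from the
# package's own test files, as in the roxygen example above).
ari_narrate(
  script = system.file("test", "ari_intro_script.md", package = "ari"),
  slides = system.file("test", "ari_intro.html", package = "ari"),
  output = "test.mp4",
  subtitles = TRUE  # also writes test.srt next to the video
)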