DEV Internal Functions
function *SL($P=$BuildRoot)
$P - new location, literal path
It sets the current location to the specified literal path.
This command is frequently called with the omitted argument in order to set the location to the build root. This is done before running user code and resolving user input paths.
function *Path($P)
$P - full or relative path
It converts the specified path to a full path based on the current PowerShell location. Without arguments it returns the current location.
function *Die($M, $C=0)
$M - error message, [object] converted to [string] as "$M"
$C - error category, [object] converted to [System.Management.Automation.ErrorCategory]
The following categories are used:
0 NotSpecified
5 InvalidArgument
6 InvalidData
7 InvalidOperation
8 InvalidResult
13 ObjectNotFound
(1) *Die is used in engine code which is called from user code. Unlike throw, it throws an error which points to the caller. And it sets the specified error category.

(2) It looks like throw includes the message in CategoryInfo and FullyQualifiedErrorId. In CategoryInfo it may be truncated; in v5 this may be done with ellipses in the middle (why?). This is relatively fine for short one-liner messages. Still, why repeat the message almost three times? But long and especially multi-line messages result in ugly formatting, especially when truncated.
So, *Die is useful not just in public functions called by user code (1). Use it in order to make errors neat (2), e.g. errors created by *Error or with *At.
NB: Do not use *Die in the main try-block, use just throw (or Write-Error perhaps?). It does not trigger the main catch, see PowerShellTraps/ThrowTerminatingError. Note, it is fine to use *Die in advanced functions called from the main try-block, e.g. #67.
NB: #67 - *Check and *Amend use *Die in order to produce cleaner errors.
NB: Functions with *Die must be advanced. But *Die itself must be simple.
function *Run($_)
$_ - command to run
$args - remaining arguments are command arguments
It invokes commands with the current location set to the build root.
Commands are normally some user code called on various build events.
If $_ is null or $WhatIf is set, the command is ignored. The parameter $_ is the only extra variable exposed to user code. As documented, we do not hide $_.
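The gist of *Run can be sketched as follows (a simplified illustration, not the real code; $BuildRoot and $WhatIf are assumed to be set by the engine):

```powershell
# Simplified *Run-like invoker: skip null commands and -WhatIf runs,
# set the location to the build root, then invoke with remaining args.
function Invoke-BuildCommand($_) {
    if ($null -eq $_ -or $WhatIf) { return }
    Set-Location -LiteralPath $BuildRoot
    & $_ @args
}
```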
function *At($I)
$I - anything with the property InvocationInfo
For an object with the property InvocationInfo (a task or an error record), the function returns its amended position message like

At <file>:<line> char <char>
+ ...

Why Trim()

In PS v2, InvocationInfo.PositionMessage has an unwanted leading \n and the line separator \n instead of \r\n. This is fixed in v3. But for v2 we keep Trim() in order to remove the leading \n.
function *Error($M, $I)
$M - error message
$I - anything with the property InvocationInfo
It joins an error message and its invocation info position message.

Why here-string

To avoid depending on the platform-specific line separator.
function *Check([Parameter()]$J, $T, $P=@())
$J - jobs to check
$T - parent task, omitted for root jobs
$P - parent tasks for cycle checks, omitted for root jobs
Plot

for each string job $_
    try to find the task by name $_
    if it is missing
        *Die "Missing task", mind messages depending on $T
    if the parent array contains the task
        *Die "Cyclic reference"
    call *Check with the task jobs, the task, and parents with this task added
NB: It is advanced for *Die.
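The plot above can be sketched roughly like this (illustrative code, not the engine's; $Tasks stands for the engine's task table):

```powershell
# Depth-first validation of task references with cycle detection.
# $Parents carries the chain of tasks leading to the current jobs.
function Test-Jobs($Tasks, $Jobs, $Task, $Parents = @()) {
    foreach($job in $Jobs) {
        if ($job -is [string]) {
            $t = $Tasks[$job]
            if (!$t) { throw "Missing task '$job'." }
            if ($Parents -contains $t) { throw "Task '$job': cyclic reference." }
            Test-Jobs $Tasks $t.Jobs $t ($Parents + $t)
        }
    }
}
```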
function *AddError($T)
$T - failed task
$_ - current error in the parent catch block
It adds the current error $_ info to Result.Errors.
Error processing
What can be done for each error:
- ADD: call *AddError
- STORE: write it to the task object
- PRINT: print information to output
- THROW: re-throw to the parent catch
Errors are processed in several catch blocks:

```
function *Task {
    try { -If block }
    catch {
        ### if-catch
        ### STORE and THROW to main-catch or ref-catch
    }
    try {
        for each job {
            if (job is reference) {
                try { *Task job }
                catch {
                    ### ref-catch
                    ### if unsafe THROW to task-catch
                    ### else ADD and PRINT, continue
                }
            }
        }
        if (STORED) {
            ### failed and stored internally, ADD and PRINT
        }
    }
    catch {
        ### task-catch
        ### STORE, PRINT location, THROW to main-catch or ref-catch
    }
}
### in the engine's main code:
catch {
    ### main-catch
    ### ADD, STORE, PRINT or THROW out
}
```
function *My
$_ - error in the parent scope

It returns true if the current error $_ is thrown by the engine itself and false if the error is external, i.e. thrown by user code.
When called from *Task:
- True: log an error message with no source, inner source is useless
- False: log a message and the useful external source

When called from the main catch block:
- True: throw a new error, source will point to the caller
- False: re-throw preserving the external source
function *Job($J)
$J - job object
It validates a job object from -Jobs, -After, -Before and returns the resolved job (task reference or action script) and the optional job data. Simple jobs are returned as they are. Jobs with data are created by New-BuildJob (job) and represented by a hashtable where the only key is the job and the value is job data. The command throws "Invalid job" if $J is invalid.

Valid jobs:

- [string] - task reference
- [scriptblock] - action
- [hashtable] with a single item
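A rough sketch of this validation (illustrative, not the engine code):

```powershell
# Returns the resolved job as is, or the job with its data for
# single-item hashtable jobs; anything else is invalid.
function Resolve-Job($J) {
    if ($J -is [string] -or $J -is [scriptblock]) { return $J }
    if ($J -is [hashtable] -and $J.Count -eq 1) {
        foreach($key in $J.Keys) { return $key, $J[$key] }
    }
    throw 'Invalid job.'
}
```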
function *Unsafe($N, $J, $X)
$N - task name
$J - jobs to be checked
$X - optional exclude list
It checks references $J to the task $N and returns:

- 1: an unsafe reference is found (build should stop)
- nothing: all references are safe (build may continue)

The first time it is called with all initial tasks $BuildTask as $J:

```powershell
if (*Unsafe ${*j} $BuildTask) {throw}
```
The code is probably not optimal for builds with failing tasks. But it is compact and this is better for scenarios without failures.
```powershell
# check each job/reference recursively
foreach($_ in $J) {
    if (<not safe call is found>) {
        return 1
    }
}
```

The expression <not safe call is found> expands to:

```powershell
($t = ${*}.All[$_]) -and $t.If -and $(if ($_ -eq $N) {$X -notcontains $_} else {*Unsafe $N $t.Jobs $t.Safe})
```

Step by step:

```powershell
# get the task object $t by its name $_; if the job $_ is a script block,
# this gets null and further checks are skipped
($t = ${*}.All[$_]) ...

# check the condition of task $t; if it is false, skip further checks,
# this task is not going to be invoked
... -and $t.If

# if the reference $_ is the bad task $N, check that it is not in the safe
# list $X (then it should fail, return 1); else it is another task whose
# trees are to be investigated, so call *Unsafe again with its jobs and safe list
... -and $(if ($_ -eq $N) {$X -notcontains $_} else {*Unsafe $N $t.Jobs $t.Safe})
```
function *Amend([Parameter()]$X, $J, $B)
$X - an extra task to be added to tasks specified by $J
$J - $X.Before or $X.After tasks
$B - case: 1: Before, 0: After
It is called on preprocessing for tasks with parameters Before and After.

NB: It is advanced for *Die.
function *Save
It exports persistent build data to the checkpoint file.
filter *Help($H)
$_ - input task object
$H - cache used by *Synopsis, a caller just passes @{}
For an input task it gets a task help object with the properties Name, Jobs, and Synopsis. It is called in the special case:

Invoke-Build ?

Why

```powershell
$r = 1 | Select-Object Name, Jobs, Synopsis
```

Because in PS v2.0, New-Object PSCustomObject -Property @{...} results in the unwanted order: Name, Synopsis, Jobs.
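The trick is easy to see in isolation: piping any value to Select-Object with property names yields a custom object whose properties keep the specified order, even in v2.

```powershell
# Creates an object with Name, Jobs, Synopsis in exactly this order.
$r = 1 | Select-Object Name, Jobs, Synopsis
$r.Name = 'MyTask'
$r.Jobs = @('job1', 'job2')
$r.Synopsis = 'Example synopsis.'
$r.PSObject.Properties.Name  # Name, Jobs, Synopsis
```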
Jobs is an array, not text (v2.9.7)

- Pros
  - With many jobs the column Synopsis is not dropped (try "?" with Tests/.build.ps1).
  - Jobs are easier to process or show differently by external tools.
- Cons
  - Only a few jobs are shown by default ($FormatEnumerationLimit = 4). On the other hand, with many jobs text would be truncated as well.
function *IO
It evaluates inputs and outputs of the current incremental or partial incremental task and returns a reason to skip if all is up-to-date. It is always called from *Task, so it does not have any parameters; it just uses $Task, the parent variable of the current task.
The function returns an array of two values:
- `[0]`: result code: `2`: output is up-to-date; `$null`: output is out-of-date
- `[1]`: information message
In the out-of-date case, the function stores the processed inputs and outputs as $Task.Inputs and $Task.Outputs. They are used in *Task later.

Plot, mind *SL before *Path and calling user code:
```
${*i} = $Task.Inputs ~ invoke if a scriptblock. NB: for PS v2 use @(& ..)
*SL and collect ${*p} = full input paths and ${*i} = input converted to FileInfo
if nothing then return (2, 'Skipping empty input.')
NB: *p is an array, *i is an array or not
if ($Task.Partial) {
    *SL is not needed, it is done above and no user code is invoked after
    ${*o} = $Task.Outputs either invoked (with *SL after this) or cast to array
    if *o and *p counts are different -> 'Different input and output'
    No user code after this, we can use simple variables.
}
else {
    *SL is not needed, it is done above and no user code is invoked after
    invoke output (if a scriptblock), set it back to $Task.Outputs, *SL
    set $Task.Inputs to *p (full input paths)
}
```
NB: Replaced Get-Item -LiteralPath .. -Force -ErrorAction Stop with [System.IO.FileInfo](*Path ..). The old does not work in paths with [ ].

NB: At some point replaced Test-Path -LiteralPath with *Path as well, due to paths with [ ]. Later on replaced with times compared with 1601-01-01, see below.
NB: Checks for Exists, #69

At some point redundant checks for Exists on output files were removed. It is documented that the time of a missing file is 1601-01-01. v3.3.9 restores the checks because Mono is different.
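The 1601-01-01 convention is easy to observe: [System.IO.FileInfo] can be constructed for a missing file, and its times then fall back to the Windows FILETIME epoch (a small standalone illustration; the path is made up):

```powershell
# FileInfo can be created for a path that does not exist; its
# LastWriteTimeUtc is then 1601-01-01, the Windows FILETIME epoch.
$missing = [System.IO.FileInfo]"$env:TEMP\no-such-file-12345.txt"
$missing.Exists            # False
$missing.LastWriteTimeUtc  # 1601-01-01 00:00:00
```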
function *Task
$args[0] - task name
$args[1] - parent task path or null for a root task
This function is the engine's heart: it invokes tasks.

A task name may come as an IB parameter, so its capitalization may differ from the original in a script. Thus, use it once to get the task object, then use the task name from the object.

Obtain the task object by name from the build list. Do not check for null, it should exist. Create this task's path from the parent path; it is for the log.

If the task is done then log and return. Before 2.14.4 we used to check for errors before this and print a different message. It's unlikely very useful.
Evaluate If. If it is false then return. If it fails then catch and store the error, then re-throw.

Process task jobs in a try block.

CASE: Job is a task reference

Call the dependent task. If OK, go to the next job. Catch: call *Unsafe. If true then re-throw. Else log the error with source info and go to the next job.
CASE: Job is a script block
Input of incremental tasks can be empty, so check for null. Use case: a task works for some files but it is possible that they do not exist.
```powershell
# *i[0] is a flag which tells how to deal with *IO
${private:*i} = , [int]($null -ne $Task.Inputs)

# initial values:
# 0: not an incremental task
# 1: incremental, *IO was not called; call it and print the result info *i[1]

# new values after *IO is called:
# 2: incremental, *IO was called, skip the incremental job
# $null: incremental, *IO was called, do the incremental job
```
Why $Task is constant

*Task creates the variable $Task. It is exposed for events and actions, and it is used by *Task itself and other engine functions. Events are dot-sourced in *Task, so that $Task is in the same scope. Thus, user code could change $Task and break normal processing. That is why $Task is made constant.
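Constant variables are a standard PowerShell feature; a standalone illustration of the protection:

```powershell
# A constant variable cannot be reassigned by later code;
# the attempted assignment throws a catchable error.
New-Variable -Name Task -Value @{ Name = 'demo' } -Option Constant
try { $Task = 'broken' } catch { 'assignment blocked' }
$Task.Name  # demo
```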