I'm trying to write a PowerShell function that takes an array argument. I want it to be callable with the array either as an argument or as pipeline input. So, calling looks something like this:
my-function -arg 1,2,3,4
my-function 1,2,3,4
1,2,3,4 | my-function
It's easy enough to get the first two:
function my-function {
param([string[]]$arg)
$arg
}
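For reference, with the function above both argument forms behave identically, since $arg binds positionally; a quick check (each element prints on its own line):

```powershell
my-function -arg 1,2,3,4   # prints 1 2 3 4, one element per line
my-function 1,2,3,4        # same result: $arg binds positionally
```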
Pipeline input, though, is harder. It's easy to get the items one at a time in the process block by using ValueFromPipeline, but then the $args variable holds a single value when input comes from the pipeline, yet the whole array when -args is used. I can use $input in the end block, but that doesn't see -args input at all, and reading $args in an end block only gets the final item from the pipeline.
I suppose that I can do this by explicitly collecting the argument values from the pipeline using begin/process/end blocks, as follows:
function my-function {
param([Parameter(ValueFromPipeline=$true)][string[]]$args)
begin {
$a = @()
}
process {
$a += $args
}
end {
# Process array here
$a -join ':'
}
}
But that seems very messy. It also seems like a relatively common requirement to me, so I was expecting it to be easy to implement. Is there an easier way that I have missed? Or if not, is there a way to encapsulate the argument handling into a sub-function, so that I don't have to include all that in every function I want to work like this?
My concrete requirement is that I'm writing scripts that take SQL commands as input. Because SQL can be verbose, I want to allow the command to be piped in (perhaps generated by another command, or read from a file with get-content), but also to allow an argument form for a quick SELECT statement. So I get a series of strings either from the pipeline or as a parameter. If I get an array, I just want to join it with "`n" to make a single string - line-by-line processing is not appropriate.
I guess another question would be, is there a better design for my script that makes getting multi-line input like this cleaner?
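One way to sketch the SQL use case is to collect the lines from either a parameter or the pipeline and join them with "`n". Invoke-Sql and its -Command parameter are hypothetical names, and the actual database call is left out:

```powershell
# Sketch only: Invoke-Sql and -Command are hypothetical names.
function Invoke-Sql {
    param([string[]]$Command)
    # If nothing was bound to -Command, assume the lines were piped in.
    if ($null -eq $Command) { $Command = @($input) }
    # Join the lines into a single SQL string; a real version would
    # send $sql to the database here instead of returning it.
    $sql = $Command -join "`n"
    $sql
}

Invoke-Sql 'SELECT *', 'FROM t'       # argument form
'SELECT *', 'FROM t' | Invoke-Sql     # pipeline form - same result
```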
Thanks - the trick is NOT to use ValueFromPipeline then...
The reason I was having so much trouble getting things to work the way I wanted was that in my test scripts, I was using $args as the name of my argument variable, forgetting that it is an automatic variable. So things were working very oddly...
PS> 1,2,3,4 | ./args
PS> get-content args.ps1
param([string[]]$args)
if ($null -eq $args) { $args = @($input) }
$args -join ':'
Doh :-)
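For completeness, renaming the parameter to anything that isn't an automatic variable makes the script behave as intended; a sketch of the corrected script, here using $arg:

```powershell
# Corrected script: the parameter no longer shadows the automatic $args.
param([string[]]$arg)
if ($null -eq $arg) { $arg = @($input) }
$arg -join ':'
```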
Use the automatic variable $input.
If only pipeline input is expected then:
function my-function {
$arg = @($input)
$arg
}
But I often use this combined approach (a function that accepts input either as an argument or via the pipeline):
function my-function {
param([string[]]$arg)
# if $arg is $null assume data are piped
if ($null -eq $arg) {
$arg = @($input)
}
$arg
}
# test
my-function 1,2,3,4
1,2,3,4 | my-function
Here's another example using PowerShell 2.0+. This one is for when the parameter is not required:
function my-function {
[cmdletbinding()]
Param(
[Parameter(ValueFromPipeline=$True)]
[string[]]$Names
)
End {
# Verify pipe by Counting input
$list = @($input)
$Names = if($list.Count) { $list }
elseif(!$Names) { @(<InsertDefaultValueHere>) }
else { @($Names) }
$Names -join ':'
}
}
There's one case where it would error out without the elseif: if no value was supplied for Names, the $Names variable would not exist at all, and there'd be problems.
If it is required, then it doesn't have to be as complicated.
function my-function {
[cmdletbinding()]
Param(
[Parameter(Mandatory=$true,ValueFromPipeline=$True)]
[string[]]$Names
)
End {
# Verify pipe by Counting input
$list = @($input)
if($list.Count) { $Names = $list }
$Names -join ':'
}
}
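A quick sanity check of the mandatory version with both call styles (assuming the my-function above has been defined):

```powershell
my-function -Names 1,2,3,4   # -> 1:2:3:4
1,2,3,4 | my-function        # -> 1:2:3:4
```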
It works exactly as expected, and I now use this pattern whenever I write piped functions.
ValueFromPipeline
You should use the pipeline (ValueFromPipeline), as PowerShell is specifically designed for it.
$args
First of all, there is no real difference between:
my-function -<ParamName> 1,2,3,4
and
my-function 1,2,3,4
(assuming that the parameter $ParamName is at the first position).
The point is that the parameter name $args is just an unfortunate choice, as $args is an automatic variable and therefore shouldn't be used as a parameter name. Almost any other name (one not in the automatic variables list) will do, as in the example from Sean M.; but beyond that, you should implement your cmdlet assuming it will be called from the middle of a pipeline (see: Strongly Encouraged Development Guidelines).
(And to do this completely right, you should give the parameter a singular name; plural parameter names should be used only in cases where the value of the parameter is always a multiple-element value.)
Middle
The proposed cmdlet in your question is not a very good example, as it only cares about the input and has a single output, so I have created another example:
Function Create-Object {
Param([Parameter(ValueFromPipeline=$true)][String[]]$Name)
Begin {
$Batch = 0
$Index = 0
}
Process {
$Batch++
$Name | ForEach {
$Index++
[PSCustomObject]@{'Name' = $_; 'Index' = $Index; 'Batch' = $Batch}
}
}
}
It basically creates custom objects out of a list of names ($Names = "Adam", "Ben", "Carry").
This happens when you supply the $Names via an argument:
Create-Object $Names
Name Index Batch
---- ----- -----
Adam 1 1
Ben 2 1
Carry 3 1
(It iterates through all the names in the $Name parameter using the ForEach cmdlet.)
And this happens when you supply the $Names via the pipeline:
$Names | Create-Object
Name Index Batch
---- ----- -----
Adam 1 1
Ben 2 2
Carry 3 3
Note that the output is quite similar (if it weren't for the batch column, the output would in fact be the same), but the objects are now created in 3 separate batches: every item goes through the process method, and the ForEach loop iterates only once per batch, because the $Name parameter contains an array with a single item on each process iteration.
Use case
Imagine that the $Names come from a slow source (e.g. a different thread, or a remote database). If you use the pipeline for processing the $Names, your cmdlet can start processing them (and pass the new objects on to the next cmdlet) even before all the $Names are available. Compare this to providing the $Names via an argument, where all the $Names need to be collected first before your cmdlet processes them and passes the new objects onto the pipeline.
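To make the streaming benefit concrete, here is a sketch with an artificially slow producer (Get-SlowName is a made-up helper). Piped into the Create-Object function above, each custom object is emitted as soon as its name arrives rather than after the whole list is ready:

```powershell
function Get-SlowName {
    'Adam', 'Ben', 'Carry' | ForEach-Object {
        Start-Sleep -Milliseconds 500   # simulate a slow source
        $_                              # emit one name at a time
    }
}

# Objects appear one by one, roughly every 500 ms:
Get-SlowName | Create-Object

# With the argument form, nothing is output until Get-SlowName
# has produced all three names:
Create-Object (Get-SlowName)
```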
I think you can achieve this by using the input processing methods: the BEGIN, PROCESS, and END blocks. I just ran into this. Here is my console output from playing around with it; you can see that by putting the body of the function in the PROCESS block, it behaves the way you would expect:
Not working
λ ~ function test-pipe {
>> param (
>> # Parameter help description
>> [Parameter(ValueFromPipeline=$true)]
>> [String[]]
>> $Texts
>> )
>> $Texts | % {Write-Host $_}
>> }
λ ~ "this", "that", "another"
this
that
another
λ ~ $s = @("this", "that", "another")
λ ~ $s
this
that
another
λ ~ $s | test-pipe
another
λ ~ test-pipe -Texts $s
this
that
another
Working
λ ~ function test-pipe {
>> param (
>> # Parameter help description
>> [Parameter(ValueFromPipeline=$true)]
>> [String[]]
>> $Texts
>> )
>> BEGIN {}
>> PROCESS {$Texts | % {Write-Host $_}}
>> END {}
>>
>> }
λ ~ $s | test-pipe
this
that
another
λ ~