How To Use the Nimbella Command Line Tool

This document is organized as follows.

Downloading and Installing the Nimbella CLI

In the following instructions we assume your intent is to install the Nimbella CLI as a command to be invoked from shells or scripts. Below we discuss how to install as a dependency of another npm package using npm or yarn. We don’t recommend installing globally with npm or yarn.

When installing for shell invocation

For Mac and Windows we provide installers. To download, click

After downloading, you must execute the provided installer.

For Linux we provide a scripted install. Use

curl | sudo bash

Regardless of the operating system, when the install completes, do

nim update

This will first of all verify that nim is installed and capable of self-updating. In most cases, it will say that it already has the latest version. However, occasionally, the initial install may be of less than the latest version and the update step will correct that.

Installing as a dependency

Since nim is implemented as an npm package, it is also possible to install it with npm or yarn, but we recommend this only for situations where you do not want a global install and are instead making nim a dependency of some other package.

npm install


yarn add

When installation finishes, you can run nim locally within the package into which it has been incorporated by using

npx nim ...

When installed in this way, nim update will not work: you have to do a fresh install to get later versions.
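For the dependency route, the result is an ordinary npm dependency. A sketch of the consuming package's package.json follows; the package name nimbella-cli matches the version banner shown later in this document, while my-app, the version range, and the script are illustrative.

```json
{
  "name": "my-app",
  "dependencies": {
    "nimbella-cli": "^0.1.3"
  },
  "scripts": {
    "deploy": "nim project deploy ."
  }
}
```

With this in place, npx nim (or the deploy script) resolves to the locally installed copy.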

Introducing the nim command

The Nimbella Command Line Tool (nim) is your primary portal to Nimbella services. Typing nim at a command prompt will get you something like the following.

> nim
A comprehensive CLI for the Nimbella stack

  nimbella-cli/0.1.3 darwin-x64 node-v10.16.3

  $ nim [COMMAND]

  action        work with actions
  activation    work with activations
  auth          manage Nimbella namespace credentials
  doc           display the full documentation of this CLI
  help          display help for nim
  info          show information about this version of 'nim'
  namespace     work with namespaces
  package       work with packages
  project       manage and deploy Nimbella projects
  route         work with routes
  rule          work with rules
  trigger       work with triggers
  update        update the nim CLI

The commands divide into four categories.

OpenWhisk Entity Management commands

The action, activation, namespace, package, route, rule, and trigger commands each manage the corresponding kind of entity as defined by Apache OpenWhisk. Nimbella powers the "serverless computing" portion of its offering with a modified version of OpenWhisk. The syntax of these seven commands approximates that of the like-named commands of the wsk binary provided by the Apache OpenWhisk project, except that route is used in place of api. (The implementation of these commands is derived from the Adobe I/O Runtime open-source project.) If you are used to wsk, note that the project command of nim is not a replacement for wsk project (see About Nimbella Projects).

Supporting commands

The doc, help, info and update commands provide supporting services in either explaining how to do things or updating the CLI to a later version. Note that nim update works only when nim was installed using the recommended installation method for use from a shell (it does not work when nim was installed using npm or yarn).

Credential Management

The auth subtree gives you management of Nimbella credentials, that is, access to specific Nimbella namespaces.

 > nim auth
Manage Nimbella namespace credentials

  $ nim auth:COMMAND

  auth:list    List all your Nimbella Namespaces
  auth:login   Gain access to a Nimbella namespace
  auth:logout  Drop access to a Nimbella Namespace
  auth:switch  Switch to a different Nimbella namespace

Notice the use of colon separators between segments of a command name. This happens because nim is based on oclif (the Open CLI Framework from Heroku). While oclif regards colons as canonical, we have logic that usually permits you to use blank separators as in most popular CLIs.

 > nim auth list

The nim command never reads or writes ~/.wskprops, as the wsk binary does; it replaces that file with a more flexible "credential store." However, nim does maintain the file ~/.nimbella/wskprops in sync with the credential store. This file has the same format as ~/.wskprops and applies to the currently selected namespace.

Thus, if you are using wsk with some other OpenWhisk installation and nim with the Nimbella stack, the two will not interfere. If you want to use wsk with the Nimbella stack, you can, but you should set the environment variable WSK_CONFIG_FILE=$HOME/.nimbella/wskprops so that wsk uses that file instead of ~/.wskprops. This does not affect nim, which ignores WSK_CONFIG_FILE. If you sometimes use wsk with a different installation and sometimes with Nimbella, you will have to change the environment accordingly.
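For example, to point wsk at the Nimbella credentials for the current shell session (a sketch; the path is the one maintained by nim as described above):

```shell
# Direct wsk to the props file nim keeps in sync with its credential store.
export WSK_CONFIG_FILE="$HOME/.nimbella/wskprops"
echo "wsk will read: $WSK_CONFIG_FILE"
```

Unset the variable (or point it at another file) when switching wsk back to a different OpenWhisk installation.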

Although nim uses the OpenWhisk nodejs client internally, it takes steps to nullify the effect of any __OW_* variables in the environment to prevent collisions with other uses of the client.

For more information on managing the credential store see Nimbella Accounts and Login and Managing Multiple Namespaces.

Project Level Deployment

The project command has two subcommands, deploy and watch, which operate on logical groupings of resources (OpenWhisk entities, web content, storage, etc.) that make up typical applications. Such a grouping is called a project. We often use the term the deployer for the parts of nim that operate on projects. Much of the rest of this document concerns itself with projects, and hence with the deployer.

 > nim project
manage and deploy Nimbella projects

  $ nim project:COMMAND

  project:deploy  Deploy Nimbella projects
  project:watch   Watch Nimbella projects, deploying incrementally on change

 > nim project deploy --help
Deploy Nimbella projects

  $ nim project:deploy [PROJECTS]

  PROJECTS  one or more paths to projects

  -v, --verbose      Verbose output
  --apihost=apihost  API host to use
  --auth=auth        OpenWhisk auth token to use
  --debug=debug      Debug level output
  --env=env          path to environment file
  --help             Show help
  --incremental      Deploy only changes since last deploy
  --insecure         Ignore SSL Certificates
  --target=target    the target namespace
  --verbose-build    Display build details
  --yarn             Use yarn instead of npm for node builds
 > nim project watch --help
Watch Nimbella projects, deploying incrementally on change

  $ nim project:watch [PROJECTS]

  PROJECTS  one or more paths to projects

  -v, --verbose      Verbose output
  --apihost=apihost  API host to use
  --auth=auth        OpenWhisk auth token to use
  --debug=debug      Debug level output
  --env=env          path to environment file
  --help             Show help
  --insecure         Ignore SSL Certificates
  --target=target    the target namespace
  --verbose-build    Display build details
  --yarn             Use yarn instead of npm for node builds

About Nimbella Projects

A project contains actions and associated web content to be deployed together into a Nimbella host so that they are visible to your end users (to the extent that you wish). We use the term action for a serverless function, following Apache OpenWhisk terminology, because the Nimbella stack builds on OpenWhisk.

In Nimbella, as in OpenWhisk, the unit of authorization is called a namespace. As in all OpenWhisk deployments, a namespace contains actions, optionally grouped into packages. (OpenWhisk has additional entities called rules, triggers, routes (aka “API gateway”), and activations; the nim command supports these individually but, currently, not as part of a project).

Going beyond OpenWhisk, a Nimbella namespace also contains other resources, such as object store buckets for web content and database instances, that are managed as part of the namespace. In Nimbella Accounts and Login, we explain how to obtain your first namespace and in Managing Multiple Namespaces we discuss how to obtain and manage additional ones.

This document won’t explain serverless computing, or OpenWhisk, in detail, but will supply links to OpenWhisk web pages when it seems that that might help.

Again, a project is simply a grouping of actions and web content that is intended to be deployed (“installed”) as a unit.

A feature that sets nim project apart from many other deployment tools is that no "manifest" or "configuration file" is required in a large number of suitably simple cases. You simply choose a directory in the file system to represent a project and lay out the content of the project under that directory using a structure that nim will recognize as a project.

After describing how to log into Nimbella, we explore the simplest case, projects containing only actions and no build steps. We then discuss adding web content to a project. After that we show how to add build steps to individual actions or the web content of a project.

Because nim project can't always avoid the need for a configuration file, we then summarize how to add one to guide nim when the file and directory structure does not convey everything it needs to know.

Nimbella Accounts and Login

If you have previously used the Nimbella Workbench and issued the login command there, the steps in this section are unnecessary.

In order to deploy a project (or, for that matter, to use many other nim capabilities), you must have permission to use a specific namespace. The current means of obtaining this permission is to visit the Nimbella Early Access Request site, provide a small amount of information, and wait for an email response containing a login token (a very long mostly hexadecimal string). Then, you use nim auth to activate your namespace.

 > nim auth login <a very long hexadecimal string provided by Nimbella Corp>
stored a credential set for namespace '...' and API host '...'

The place where nim stores credentials will be called the credential store in this document. It is shared between nim and the workbench. You should only need to do login once for each namespace (whether this is in the workbench or nim).

Assuming nothing goes wrong you should be able to view the credential store as follows.

 > nim auth list
Namespace           Current  Storage  Redis  API Host
<your namespace>    yes      yes      yes    https://...

As the format implies, you can have multiple namespaces as detailed further in Managing Multiple Namespaces.

The initial namespaces provided by Nimbella have storage and redis by default.

Setting up a ‘no-configuration’ project (actions only)

A project containing only actions (with no web content or build steps) is especially easy to set up. We will start with the very simplest case, where every action has its code contained in a single file.

Actions as Single Files

> mkdir -p example1/packages/demo
> cp hello.js example1/packages/demo
> nim project deploy example1

Result of deploying project '.../example1'
  to namespace '...'
  on host ''
Deployed actions:
  - demo/hello

The example assumes you already have hello.js containing the complete source to an action (for more information about actions, see Apache OpenWhisk documentation).
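If you don't already have a hello.js on hand, a minimal one in the conventional OpenWhisk nodejs shape can be created like this (the greeting logic is purely illustrative):

```shell
# Create the project skeleton and a single-file nodejs action.
mkdir -p example1/packages/demo
cat > example1/packages/demo/hello.js <<'EOF'
function main(args) {
  const name = args.name || 'stranger'
  return { body: `Hello ${name}!` }
}
exports.main = main
EOF
ls example1/packages/demo
```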

The deployer names the action based on the file name (stripping the suffix) and prepending the package qualifier based on the name of the package directory. In the example, the action has the (package qualified) name demo/hello.

If you want an action to have a simple name (no package qualification) you put it in a package directory called default. In that case, there will be no package qualifier prepended.

The deployer determines the kind of runtime required for the action from the file suffix. In the example, the deployer uses the nodejs:default runtime (inferred from the suffix .js). Runtimes currently supported by Nimbella are nodejs (suffix .js), python (suffix .py), java, swift, php and go. For Java we support suffixes .java for source and .jar for a pre-built JAR file.

About Project Structure

As already illustrated, a project has a root directory, within which a certain small number of directory names are significant to the deployer. Anything else in the root directory will be ignored by the deployer. So, you can put documentation there, and also directories that will be used by features of the deployer (like building) to store things that need to be “off to the side.”

Within the root directory is (among a few other things) the packages directory. In this directory, each subdirectory represents a package. Therefore, you can’t put other directories (that aren’t packages) there. However, you can put files there, and they will be ignored by the deployer.

Each subdirectory of packages (representing a package) is assumed to contain actions. As we will see, actions can be represented by either files or directories. Therefore you need to avoid putting either files or directories there unless they represent actions.

Deployer Record Keeping

The deployer will record its latest status in a subdirectory of the project called .nimbella. All files in the .nimbella directory are generated by nim and should not be edited by you. If your project is under git control, the entire directory should (probably) be listed in .gitignore. Currently, all status is recorded in a single file called versions.json, whose contents should look something like this.

    "apihost": "https://...",
    "namespace": "...",
    "packageVersions": {
      "demo": {
        "version": "0.0.1",
        "digest": "ab87f791f2d2..."
    "actionVersions": {
      "demo/hello": {
        "version": "0.0.3",
        "digest": "ca5b7a03c1bb..."

The versions.json file can be used to compare what is actually in your namespace with what the deployer last deployed from this physical copy of the project. OpenWhisk increments version numbers for actions and packages on each update, and the deployer records the last-deployed version locally. For example, if you later detect that demo/hello is at version 0.0.2 while the deployer last deployed version 0.0.1, this means that the action was updated outside the deployer or by some other project or copy of this project. Disambiguating these cases may require further inspection of the deployed action.

As you can see, the entry for the package and for each action also includes a digest field. This is used to control incremental deploying, described in a later section.

If you request one of the options to clean an action, package or namespace prior to deploying (see Adding Project Configuration), then the version numbering of the “cleaned” action may start over again at 0.0.1.

OpenWhisk supports annotations on actions and packages. The deployer generates an annotation of its own in each action and package it deploys.

> nim action get demo/hello

    "namespace": ".../demo",
    "name": "hello",
    "version": "0.0.1",
    "annotations": [
            "key": "deployer",
            "value": {
                "repository": "...",
                "commit": "...",
                "digest": "...",
                "projectPath": "...",
                "user": "..."

The details vary according to whether the deployed project is under git control. If the project is managed by git, the repository and commit fields identify the git repository and commit, the projectPath is relative to the repository root, and user is taken from the git configuration.

If the deployed project does not appear to be under git control, then the repository and commit fields will be omitted, the projectPath will be absolute, and user will be the local user name according to the operating system.

If you deploy to different namespaces or API hosts at different times, the array in versions.json will have more than one entry, with versions for the last deployment to each distinct API host / namespace target.

Actions are “web” actions by default

Every action produced by a 'no-configuration' project will be what OpenWhisk calls a "web action". This means the action is publicly accessible via a URL. The URL can be constructed by hand from the API host, namespace, and package-qualified action name, but this is tedious. Instead, you can use the nim action get command to retrieve the URL of a web action, as in

> nim action get demo/hello --url

There can be good reasons why you don’t want your actions to be web actions. However, to label actions as non-web requires the use of configuration as explained below.

Of course there can be more than one action

Adding more (single file) actions to a project is easy. Just create more package directories, as needed, and add the actions to them. Assuming example1 as shown previously

> mkdir example1/packages/admin
> cp adduser.js example1/packages/admin
> mkdir example1/packages/default
> cp sampleJavaScript.js samplePython.py welcome.js example1/packages/default
> mkdir example1/packages/test
> cp work0.js work30.js example1/packages/test
> nim project deploy example1

Result of deploying project '.../example1'
  to namespace '...'
  on host ''
Deployed actions:
  - admin/adduser
  - sampleJavaScript
  - samplePython
  - welcome
  - demo/hello
  - test/work0
  - test/work30

There is no limit on how many packages and actions can be in a project; ideally, a project will represent a logical unit of functionality whose boundaries are up to you. The default behavior of the deployer (deploying everything in the project) can then be somewhat time-consuming. The [incremental deployment](#Incremental) option is designed to overcome that problem.

An alternative to creating large projects is to create small ones. The nim project deploy command accepts a list of projects in a single invocation.

> nim project deploy example1 example2 ...

Of course, having lots of small projects complicates building and you only get fine-grained behavior by specifying the projects manually. The incremental option allows you to have “right sized” projects without overly long deployment steps during iterative development.

Zipped actions

OpenWhisk supports actions in which there are multiple source files, zipped together. You can provide such a "single file" action in this way, as long as its suffix is .zip. Since the .zip suffix does not convey the kind of runtime required, you form the name using two dots: a name like hello.nodejs.zip can be used for a zipped action whose action name is hello and whose runtime kind is nodejs:default. You can also select a non-default runtime version in the same way, if Nimbella supports it.

When you make your own zipped actions, you will typically create the zips in a separate build step. As will be seen, there are alternatives that may be preferable depending on your overall needs.
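As a sketch of such a build step (the two-dot naming follows the convention described above; the source files, and the use of Python's stdlib zipfile CLI for zipping, are illustrative):

```shell
# Zip two source files into an action named hello on the nodejs runtime.
mkdir -p zipsrc
printf 'exports.main = require("./helper").main\n' > zipsrc/index.js
printf 'exports.main = () => ({ body: "hi" })\n' > zipsrc/helper.js
(cd zipsrc && python3 -m zipfile -c ../hello.nodejs.zip index.js helper.js)
ls hello.nodejs.zip
```

Placing hello.nodejs.zip in a package directory would deploy it as the action hello.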

Some language runtimes (e.g. Java) also accept specialized archives (e.g. .jar files) or may directly accept binary executables. Where this is indicated by the extension, the extension will still imply the language runtime (as in the Java case). Other cases are not specially handled by nim and might require using configuration.

Multi-file actions (with “auto-zip”)

An alternative to making your own zipped actions is presented in this section. Let’s alter example 1 a little bit.

> mkdir -p example2/packages/demo/hello
> cp helloMain.js helloAux.js example2/packages/demo/hello
> nim project deploy example2

Result of deploying project '.../example2'
  to namespace '...'
  on host ''
Deployed actions:
  - demo/hello

As the example shows, an action can be a directory instead of a single file. The action will be named for the directory. The files in the directory are then zipped automatically to form the action. For this to work in a ‘no-configuration’ project, at least one file must have a suffix from which the runtime kind can be inferred and there may not be multiple suffixes suggesting different runtime kinds. In addition, exactly one file must contain an identifiable main entry point as required by the particular runtime selected. These limitations can be relaxed by using configuration.

Subdirectories can be present under an action directory (e.g. node_modules). These will be zipped up with everything else.

You can optionally limit the files to be zipped in one of two ways. A file called .include can list exactly the items to be included and anything else in the action directory will be excluded. Wildcards are not permitted in this file but entries can denote directories as well as files. The .include file can also be used for linking, as described below.

Alternatively, you can have a file called .ignore stating which files and directories not to include. The .ignore file follows the same rules as .gitignore and should have the same effect. It is not necessary to list .ignore inside itself (it is automatically ignored, as are certain build-related files). You cannot have both .include and .ignore.
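As a sketch, a hypothetical .include for a nodejs action directory might read as follows (entries name files or directories; wildcards are not permitted):

```
index.js
lib
node_modules
```

An .ignore for the same directory might instead list build residue (e.g. *.log or a src directory), using .gitignore syntax.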

No actual zipping occurs if the action directory contains only a single file (after applying .include or .ignore). In that case the action is still named for the directory but consists of just that one file and takes its runtime kind from the file's suffix.

“Linking” action source from elsewhere in the filesystem

It is possible for the .include file to contain entries that denote files or directories outside the action directory. That is, entries can be absolute paths or relative paths containing ‘..’ (relative to the action directory). These paths can terminate inside or outside the project, but you might want to use caution in terminating them outside the project because it makes it harder to relocate the project as a whole. Recall that there can be arbitrary directories in the root directory of the project, which becomes a good place to put “out of line” material.

Entries in .include are interpreted differently if they are absolute or contain ‘..’: the resulting entries in the zip file will start with the last segment of the listed path. That is, if you have ../../../actionSrc/node_modules, the contents of that directory will be zipped, but files inside the directory will have the form (e.g.) node_modules/<path>. Similarly, the file ../../../actionSrc/helpers.js becomes just helpers.js.
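Continuing the hypothetical paths from the text, an .include combining local and out-of-line entries might read:

```
index.js
../../../actionSrc/node_modules
../../../actionSrc/helpers.js
```

The resulting zip would contain index.js, entries of the form node_modules/<path>, and helpers.js.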

Deploying Incrementally

Consider the previous example whose output was

Result of deploying project '.../example1'
  to namespace '...'
  on host ''
Deployed actions:
  - admin/adduser
  - sampleJavaScript
  - samplePython
  - welcome
  - demo/hello
  - test/work0
  - test/work30

Now let’s suppose that you’ve changed demo/hello and welcome but not the others. You aren’t ready to do a production deployment or submit for testing, you just want to deploy the actual changes so you can continue developing. You do this using the --incremental flag.

> nim project deploy example1 --incremental

Result of deploying project '.../example1'
  to namespace '...'
  on host ''
Deployed actions:
  - welcome
  - demo/hello
Skipped 5 unchanged actions

The --incremental option skips the uploads of actions whose digests have not changed. Those digests are computed over the action’s contents and also its metadata (thus, when you change properties of an action using configuration, the change will be detected). The --incremental option also skips the re-zipping of large multi-file actions whose included contents are older than the last zip.

As will be seen, the --incremental option also applies to static web content.

Unless build steps are added, the incremental option will be accurate in determining what has changed. Once you add build steps, some heuristics come into play as discussed in a later section.

Project watching

A good way to exploit the --incremental option when developing is to use nim project watch.

 > nim project watch example1
Watching example1
...
Deploying 'example1' due to change in 'project.yml'

Result of deploying project '/Users/joshuaauerbach/nimbella/example1'
  on host ''
Skipped 7 unchanged actions
Deployment complete.  Resuming watch.

The ellipsis in the example isn't part of the transcript; it represents a passage of time during which the project's project.yml was changed in a way that did not affect the semantics of the action demo/hello (if it had, demo/hello would have been redeployed). The project watch command accepts a list of projects and most of the flags that project deploy accepts (an exception is --incremental, which is assumed). The command runs until interrupted; typically, one devotes a terminal window to it while working elsewhere, e.g. in your favorite IDE.

Adding Project Configuration

In the previous section we already mentioned limitations on what can be done in a ‘no-configuration’ project. These limitations can often be overcome by providing a configuration file called project.yml in the project’s root directory. This is coded in YAML.

The structure of the information in the config file should follow the structure of the project itself. That is

globalStuff: ...
packages:
  - name: pkg1
    pkg1modifier1: ...
    pkg1modifier2: ...
    actions:
      - name: action1
        action1modifier1: ...
        action1modifier2: ...
  - name: pkg2

The project configuration is merged with what is inferred from file and directory names, so it is only necessary to put information in the configuration that cannot be inferred from file or directory names or for which the defaults aren’t what you want. Let’s suppose that in example1 of the previous section we did not want hello to be a web action and its main entry point could not be determined directly from the code. We would have specified the following in the configuration file.

packages:
  - name: demo
    actions:
      - name: hello
        web: false
        main: myMain
        limits:
          timeout: 10000

The action modifiers that can go in the configuration are as follows.

The web modifier has the same semantics as it has on nim action create or wsk action create, except for the default. The value ‘yes’ or ‘true’ produces a normal web action. The value ‘no’ or ‘false’ produces an action that is not a web action. The value ‘raw’ produces a raw HTTP web action. The default is ‘true’ if not specified. These behaviors are actually accomplished via annotations with reserved meanings that are merged with annotations provided by you.

The ‘webSecure’ modifier has the same semantics as --web-secure has on wsk action create (nim action create does not offer a similar flag). It generates the require-whisk-auth annotation according to whether you specify false (the default), a string value (the secret to use) or true (nim generates the secret for you).

The parameters and annotations modifiers attach parameters and annotations to an action, as in the following.

packages:
  - name: demo
    actions:
      - name: hello
        annotations:
          final: true
          sampleAction: true
        parameters:
          language: English

The keys and values of parameters and annotations are up to you, so the details are unimportant. The important thing is that both clauses are “nested maps” in YAML terms and can have as many keys and values as needed.

The clean modifier requires some explanation. The deployer installs actions using the update verb, meaning that there is some history maintained in the installed action. The version number will be incremented. Parameters and annotations from a previous incarnation will be retained unless changed. The code is always installed anew, however. The clean flag guarantees that the action is built only from the information in the project by erasing any old copy of the action before deploying the new one.

The package modifiers that can go in the configuration are as follows.

Note that clean at package level is not the same as specifying clean on each action of the package. At package level, the clean flag will remove all actions from the package before deploying, even ones that are not being deployed by the present project and will remove package parameters and annotations. The clean flag at package level is only appropriate when you want the project to “own” a particular package outright.

There are also some useful global members of the configuration.

The cleanNamespace global flag and the clean flags on actions and packages are ignored when --incremental is specified.

Two additional configuration members (bucket and actionWrapPackage) are documented in the web content chapter.

Symbolic Variables

The configuration can contain symbolic variables of the form ${SYMBOL} where SYMBOL is chosen by you. The substitutions for these variables are taken from the process environment or (optionally) from an “environment file”.

The environment file will typically take the form of a “properties file” (key value pairs as in the following example).
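A hypothetical environment file in properties form (the keys and values are illustrative):

```
TARGET_NAMESPACE=myNamespace
ADMIN_PASSWORD=s3cret
```

A project.yml could then reference these values as ${TARGET_NAMESPACE} and ${ADMIN_PASSWORD}.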


The environment file can also be JSON, as long as it contains a single object to be interpreted as a dictionary.

You can specify the environment file explicitly on the command line.

nim project deploy myProject --env test.env

If there is no --env option on the command line, but there is a file called .env located in the root of the project it will be used as the environment file.

Substitution is performed as follows

File Substitution

The configuration can also "inline" the contents of certain files at certain places in the configuration. There are constraints on how this can be used, as explained below. Where it is legal, you request file inclusion by using the < modifier in what otherwise looks like a symbolic variable, e.g. ${<.extraConfig}. You can provide any valid file system path (absolute or relative), provided it denotes a file; relative paths are resolved against the project directory.

File substitution can only be used in places where the configuration would expect a “sub-dictionary” (a closed grouping of key value pairs under a specific heading like parameters, annotations, or bucket). By “closed” we mean that you can do the following.

parameters: ${<.parameters}

However, you can’t do the following.

parameters: ${<.parameters}
  anotherParameter: value

The file to be inlined must either contain JSON or be in the form of a “properties” file (key value pairs). In other words, it takes the same form as the “environment file” used in symbol substitution, but it need not be the same file (on the other hand, it may be the same file, if you find that convenient). If it is in the form of a properties file, it will be converted into a “shallow” dictionary (no nested sub-dictionaries) for the purpose of inclusion. With JSON you can escape this restriction and have nested structure. Note that the file is not interpreted as YAML.

Warning: all inclusions are processed before the resulting YAML is parsed. For this reason, errors can be obscure when you violate the restrictions.

Typical use cases would be to set parameters or annotations on an action or package or set the top-level parameters.

parameters: ${<.parameters}
annotations: ${<.annotations}

Adding static web content

You add static web content to a project by adding a directory called web which is a peer of the directory called packages. This directory should contain files whose suffixes imply well-known mime types for web content, such as .html, .css, .js (etc). Note that JavaScript files in static web content are not actions but are scripts intended to run in the browser.

The web directory can have subdirectories and can be built by web-site builders or other tools.

Like an action directory, the web directory may contain .include or .ignore to control what is actually considered web content (as opposed to build support or intermediate results). The web directory also supports integrated building, just like an action directory.

Let’s first look at a project with modest web content, populated by hand. The actions of the project are not shown, for simplicity.
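Such a project might be laid out as follows (file names illustrative, chosen to match the bucket example later in this document):

```
example3/web/chatroom.html
example3/web/chatroom.css
example3/web/error.html
example3/web/runner.js
```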


Deploying the project, we see the following.

 > nim project deploy example3

Result of deploying project '.../example3'
  to namespace '...'
  on host ''
Deployed 4 web content items to
Deployed actions:

As the output shows, the contents of web were deployed to the web, with URLs within your namespace's unique DNS domain, which begins with <ns>. The token <ns> will be replaced by the name of your namespace; the remaining portion of the domain name may differ from the typical one, depending on your API host within the Nimbella cloud. To access the content, either http or https may be used. For https, the SSL certificate will be that of Nimbella Corp.

When web content is deployed, entries are made in .nimbella/versions.json just as for actions and packages. Since web resources do not have version numbers, only the digests are stored. But those digests are used, just as they are for actions, to bypass the deployment of web resources that have not changed since the last deployment when the --incremental flag is specified.

When https://<ns> is used as a URL with no additional path component, a path of /index.html is assumed (which would not be convenient for example3). You can change this by adding a bucket member (top level) to project.yml. In a nested map under bucket you can specify several pieces of information. All of the entries in the following example are optional.

bucket:
  prefixPath: "chatroom"
  clean: true
  mainPageSuffix: chatroom.html
  notFoundPage: "error.html"
  strip: 1

The prefix path is prepended to every URL path as resources are uploaded. For example, given the examples above, the resource runner.js would be deployed to https://<ns>. If your web content does not need to be placed at the root of the URL path space, this allows web content from different projects to share a namespace and a hostname. Ensuring that namespace sharing works for your particular content is beyond the responsibility of the deployer (it does not rewrite URLs internal to your content).

The clean flag indicates whether old content should be deleted prior to deploying the new content. The default is false. The content to be deleted is everything under the prefixPath, if specified, or all previously deployed web content otherwise. Note also that a top-level cleanNamespace: true designation will clear web content along with actions.

The mainPageSuffix is called a “suffix” because it affects what happens when any URL is used that denotes a directory rather than a file. This includes (but is not limited to) the case where there is no path segment in the URL at all. If you do not specify a mainPageSuffix the default is index.html. The deployer does not generate index.html nor any other file you name here: you must provide the file as part of the web content.

The notFoundPage nominates a web page to be used for a URL that does not match any content. The page designated here will be returned with every 404 (“not found”) error. If you do not specify a notFoundPage, the default is 404.html. Nimbella places its own 404.html at the root of every namespace and will preserve a file by that name when deleting web content from the namespace. You may overwrite the provided 404.html or leave it there and use a different name for your “not found” page (the latter approach allows you to revert to the Nimbella-provided one by removing the notFoundPage directive).

Both the mainPageSuffix and the notFoundPage are global to the namespace, so, if you do deploy multiple web folders into the same namespace using separate projects, either use the same values in all such projects or only specify them in one of the projects. It should be possible to obtain more than one namespace from Nimbella to deal with any conflicts that are otherwise hard to resolve.

The strip option is, in a sense, the opposite of prefixPath, in that it removes path segments rather than adding them. You can have both strip and prefixPath, to first remove existing segments, then add new ones. The strip option is mostly useful when you use a tool to generate web content. Chances are the tool will want to put its output in a specific directory. Consider an example with the following directory structure under web.
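
The structure itself is not shown here, but from the description that follows it is approximately (individual file names omitted):

  chat/web/.include
  chat/web/public/     (react application source)
  chat/web/src/        (react application source)
  chat/web/build/      (generated by the react build; the content to deploy)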


In web itself, in addition to .include, are some other files related to building (not shown). The public and src directories, between them, contain the source of a react web application. The build directory is generated by the react build and contains the entire content that you want to deploy. The .include file has simply


In project.yml you have

bucket:
  strip: 1

The deployment should go like the following.

 > nim project deploy chat
Running './ in chat/web

Result of deploying project '.../chat'
  to namespace 'chatdemo'
  on host ''
Deployed 24 web content items to
Deployed actions:
  - chatadmin/create
  - chatroom/postMessage

Limitations, Requirements

We currently support the preferred form of deployment, as shown above, only when your API host is in the Google Cloud Platform (as will be the case for initial customers). We intend to provide the service transparently across all of our supported clouds in the future.

For a web deployment to work correctly, the namespace entry in the credential store must include storage (look in the ‘Storage’ column of nim auth list). At present, the first namespace created for each user does include this member, but it is possible to create namespaces without it.

The “Action Wrapping” Alternative

If you are familiar with OpenWhisk web actions you may know the trick of converting a web page to a string constant that is then returned by a web action. The result appears to the user as if static content were being served. The deployer will automate this idiom for you, as long as your web directory has no subdirectories. To employ this option, place a top-level member actionWrapPackage in your project.yml and do not also provide a bucket member. For example, to place all of your web content in the package called demo:

 actionWrapPackage: demo

The package you designate may also contain actions, or not, as you wish.

Of course, the performance characteristics of this solution are not ideal for static content. However, if your static content is simple enough, meets the “no subdirectories” restriction, and is incidental to a project consisting mostly of actions, action wrapping may be adequate. In terms of the current limitations mentioned in the previous section, action wrapping is available on all Nimbella-supported clouds and does not require a namespace whose credentials include a storage member.

Incorporating build steps for actions and web content

The web directory, and also every directory that represents an action, can be built automatically as part of deployment. You can trigger this behavior in one of three ways.

  1. By placing a file called build.sh (for mac or Linux), or build.cmd (for windows), or both, in the directory. This file should contain a script to execute with the directory as current directory. If both forms are provided, only the one appropriate for the current operating system will be used. If only one is provided, the deployer will run it on systems for which that kind of script is appropriate and indicate an error on other systems.
  2. By placing a file called .build in the directory. The rules for this option are explained under out-of-line builds below.
  3. By placing a package.json file in the directory. The presence of this file causes npm install --production (or yarn install --production) to be executed with the directory as current directory.

These triggers are examined in the above order, and, if one is found, the others are not considered by the deployer (of course, a script in build[.sh|.cmd] can always do its own npm install or yarn install).
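
As an illustration of the first trigger, here is a minimal build.sh sketch. It is hypothetical: the copy step stands in for a real compile or bundling step, and the .built marker relates to the incremental behavior discussed later in this document.

```shell
#!/bin/bash
# Hypothetical build.sh for an action directory (placeholder build logic).
set -e                                 # any failing command makes the build exit non-zero
mkdir -p lib                           # directory for built artifacts
cp src/*.js lib/ 2>/dev/null || true   # stand-in for a real compile/bundle step
touch .built                           # optional marker consulted by --incremental deploys
```

Because the deployer judges success solely by the script’s return code, the set -e line matters: without it, a failing intermediate step could still yield a zero exit status.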

Note that build.sh, build.cmd, and .build (but not package.json) are automatically ignored and do not have to be listed in .ignore.

Building precedes the determination of what files to upload (web) or zip into the action (action directories). This has two implications.

Errors in Builds

The deployer decides whether a build has failed by examining the return code from the subprocess running the build. Thus, it is good practice to ensure that a build sets a non-zero return code on failure. When a build returns zero, the deployer does not display its output. If it returns non-zero, all of its output (both stdout and stderr) is displayed.

If you suspect a build is not doing what you expect but there is no visible error, try rerunning nim project deploy with the --verbose-build flag. This causes all of the output of the build to display on the console, regardless of apparent success. This will often reveal errors in the build that are being swallowed because the build is returning zero despite the errors.

We’ve tried using other criteria, such as the presence of output on stderr, but that does not work well in practice. Many utilities (most notably npm) routinely write some of their output to stderr.

Out-of-line builds and shared builds

There are several possibilities when using the .build directive (“out-of-line” building).

  1. If .build contains a single line giving a path name of a file, that file is taken to be a script and is executed with the directory containing .build as the current directory.
  2. If .build contains a single line giving the path name of a directory, that directory is made current and building is performed there, based on the presence of build[.sh|.cmd] (higher priority) or package.json. Recursive use of .build is not supported.
  3. If the directory of the previous case contains a marker file called .shared (contents ignored), the deployer will ensure that the build in that directory is only run once.
  4. If .build has more than one line, or is empty, or denotes a file or directory that does not exist, or denotes a directory not containing one of the recognized build directives, an error is indicated by the deployer.

Recall that it is possible to place arbitrary content in the root of a project as long as it does not conflict with the reserved names web, packages, .nimbella, or project.yml. So, directories containing out-of-line build support can be placed there.
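
For instance (a purely hypothetical layout; all names are illustrative), shared build support at the project root could be referenced from each action’s .build file:

  project/
    project.yml
    commonbuild/
      build.sh
      package.json
      .shared
    packages/
      demo/
        action1/
          .build     (contains the single line: ../../../commonbuild)
        action2/
          .build     (same content)

Because commonbuild contains a .shared marker, the deployer would run its build only once even though two actions point at it.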

The effect of --incremental on Builds

Using the --incremental option has an effect on whether or not builds are executed.

Each action that has a build step can be either in a built or unbuilt state. Similarly, the web directory can be either built or unbuilt. If an action or web directory is unbuilt, the build is run as usual prior to determining if the content has changed. If the directory is built, the incremental deployment proceeds directly to change determination without re-running the build. The state is determined as follows.

In the script case, the convention of using a .built marker to suppress subsequent builds requires the script to set this marker when it executes. It’s a very coarse-grained heuristic, which we offer because (1) the deployer doesn’t know the dependencies of the build and (2) we want to err in the direction of efficiency when doing incremental deploying. You always have the remedy of running a full deploy. But, note that the use of this convention is optional. If the script does not create a .built marker, it will always run, which could be fine if the script does dependency analysis and rebuilds only what it needs to.

In the package.json case, what we do is also a heuristic and won’t be perfectly accurate if the package.json actually contains scripts that run as part of the install step. However, we believe it will work well in simple cases. Again, you always have the fallback of running a full deploy.

Examples of building (common use cases)

Let’s start with a simple node dependency.

Project example5 has a function in a single file but it has node/npm-style dependencies. Here is a part of the project layout


Let’s deploy that.

> nim project deploy example5
Running 'npm install' in example5/packages/demo/qrfunc

Result of deploying project 'example5'
  to namespace '...'
  on host ''
Deployed actions:
  - demo/qrfunc

Yes. That’s all that was needed. The presence of package.json triggered the npm install, after which the normal behavior for multi-file actions (autozipping) took over and created a zip file to upload that contained qr.js, package.json and the entire node_modules.

If you try this yourself bear in mind that the OpenWhisk runtime for nodejs requires either that package.json provide an accurate designation of the main file or else that the file be called index.js.
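
For example, a hypothetical package.json for qrfunc that designates qr.js as the main file might look like this (the dependency shown is illustrative, not taken from the actual example project):

  {
    "name": "qrfunc",
    "version": "1.0.0",
    "main": "qr.js",
    "dependencies": {
      "qrcode": "^1.4.4"
    }
  }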

Now let’s consider the case where there are many actions, each with unique content, but only one node_modules.


The package.json in build specifies the common dependencies. The .shared file is empty (it is just a marker). Each .build directive looks like this.


Each .include file looks like this.


The regularity of this example shouldn’t be taken to mean that the individual actions cannot have more than one source file (they can easily do so). They do, however, need to share the same package.json if they want to use the common build. Because of that, they also need to use the same main file name (index.js in this case). If at any time you want some actions to have their own package.json, just add it and remove .build. Those actions then have their own dependencies (and their own npm install or yarn install) but then they are not using the common one any more. A project can have a mixture of actions that share builds and actions that don’t.

Another way to use a common build is to use a shell script and put logic there that completely populates the actions and web content (all of which simply point to it). It will execute exactly once, walk the directory structure of the project, and do whatever needs doing.

Managing Multiple Namespaces

There are a number of reasons why it may be useful to have multiple namespaces. A typical namespace is provisioned with two storage buckets (one for web content and one accessible to actions for use as a virtual file system), a redis instance, a DNS domain name for web content, and a set of OpenWhisk resources. While multiple applications can share a namespace, there are also good reasons to isolate them.

To obtain additional namespaces at this point in time it is necessary to contact Nimbella Support. Identify yourself as an existing developer and provide the email you used for signing up initially.

The way you are granted access to an additional namespace is identical to the way you activated your initial one. You receive a token via email and then you use

nim auth login <long hexadecimal string provided by Nimbella>

This will add the additional namespace to your credential store and you will now be able to switch between namespaces. Initially, the newly added namespace is “current” (explained in more detail in the following).

The easiest way to manage multiple namespaces is to add the

targetNamespace: <namespace>

top level directive to the project.yml of each project and simply maintain the rule that each project is tied to a namespace. More complex development scenarios (where a single project may deploy to different namespaces, e.g. a test namespace and a production namespace) can be managed by using the --target directive of nim project deploy.

nim project deploy <projectPath>... --target <namespace>

If you have a targetNamespace in project.yml and also use the --target directive, the latter takes precedence. A value specified with --target is remembered, and will apply to subsequent deployments that do not use either targetNamespace or --target to specify a new target.

It is also possible to change the remembered target namespace without deploying anything.

nim auth switch <namespace>

If you use the wsk command in conjunction with nim, note that the file ~/.nimbella/wskprops (not ~/.wskprops) is updated on every switch of the target namespace via nim auth. You can connect your wsk to this Nimbella-maintained property file using the WSK_CONFIG_FILE environment variable.
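
For example, using the path mentioned above:

```shell
# Make wsk read the Nimbella-maintained credential file instead of ~/.wskprops
export WSK_CONFIG_FILE="$HOME/.nimbella/wskprops"
```

Subsequent wsk commands in the same shell then target whatever namespace is current according to nim auth.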

If you need the deployment of a project to have different characteristics depending on the target namespace (e.g. parameters that might differ between test and production), you might prefer to use symbolic substitution, e.g. targetNamespace: ${NAMESPACE}, and provide the value of NAMESPACE in an environment file along with other substitutions.
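
For instance, using a hypothetical environment file named dev.env:

  # In project.yml
  targetNamespace: ${NAMESPACE}

  # In dev.env (supplied as the environment file for symbol substitution)
  NAMESPACE=mytestns

Deploying with a different environment file (say, prod.env) would then retarget the same project without editing project.yml.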

Usually, a Nimbella developer has just one API host and all namespaces use the same one. But, multiple API hosts can be accommodated as well.

  1. If all of your namespaces have unique names, even though some are on different API hosts, the API host is automatically switched when you switch the namespace.
  2. If you happen to have identically named namespaces on different API hosts, then you must use the --apihost flag to disambiguate as in the following.
nim auth switch <namespace> --apihost <API host>
nim project deploy <projectPath>... --target <namespace> --apihost <API host>