TypeScript ambient module declarations

With ES2015, JavaScript got the concept of modules using the export and import keywords. TypeScript supports this and it all works well as long as you are writing your modules in TypeScript. But to use external libraries or code that is not written in TypeScript, you need a type declaration. For all major libraries these already exist in the DefinitelyTyped repository, which can be queried via TypeSearch. But sometimes you’ll have to write one yourself.

I had some problems understanding how these declarations should be consumed and found the documentation a little tricky to understand, which led me to write this post.

Ambient modules

From TypeScript documentation:

We call declarations that don’t define an implementation “ambient”. Typically, these are defined in .d.ts files. If you’re familiar with C/C++, you can think of these as .h files

So to write type declarations for code that is already written in JavaScript, we have to write an ambient module. As stated above, these are almost always defined in a file ending with a .d.ts extension.

How can we define an ambient module? There are two ways.

Global declarations

When using declare module to create an ambient module, the name of the .d.ts file doesn’t matter; what’s important is that the file is included in the compilation.

declare module "my-module" {
    export const a: number;
    export function b(paramA: number): void;
}

When the file above is included in the compilation, TypeScript will register that there’s a module named my-module which can then be imported:

import { a, b } from "my-module"

There are different ways to include declaration files in the compilation:

  1. Specify a path in the typeRoots compiler option in tsconfig. All global declaration files under typeRoots will be automatically included. This can be controlled with the types compiler option, where you can explicitly choose which declarations should be automatically included (see the tsconfig sketch after this list)
  2. Specify the files property in tsconfig so that the declaration file is included
  3. Use the triple-slash directive /// <reference path="..." />
  4. With the help of the paths compiler option in tsconfig
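
To illustrate options 1 and 2, a tsconfig.json sketch could look like this. The ./typings folder and the file names are assumptions for the example, and you would normally pick one mechanism rather than combine them all:

{
    "compilerOptions": {
        // 1. automatically include global declarations found under these folders
        "typeRoots": ["./node_modules/@types", "./typings"],
        // ...optionally restricted to specific packages
        "types": ["my-module"]
    },
    // 2. or include a specific declaration file explicitly
    "files": ["./typings/my-module/index.d.ts"]
}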

From TypeScript documentation:

Keep in mind that automatic inclusion is only important if you’re using files with global declarations (as opposed to files declared as modules). If you use an import “foo” statement, for instance, TypeScript may still look through node_modules & node_modules/@types folders to find the foo package.

Files declared as modules

When using top-level export and import declarations, the .d.ts file does not need to be included in the compilation. The important thing here is that the file is named index.d.ts and resides in a folder named after the module, which in this case is my-module.

// index.d.ts
export const a: number;
export function b(paramA: number): void;

// In a file importing the library
import { a, b } from "my-module"

By default, TypeScript will try to look up my-module in a number of steps, looking for both code (.ts files) and declarations (.d.ts files). One of those steps is to look for declaration files in node_modules/@types: it looks for a folder named after the imported module containing an index.d.ts file like the one above.
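
So for the import above to resolve from node_modules/@types, the layout on disk would look something like this:

node_modules/
  @types/
    my-module/
      index.d.ts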

Sometimes you don’t want to publish declaration files to DefinitelyTyped but instead keep a folder with custom type declarations, and therefore need to tell TypeScript to look for declarations somewhere other than node_modules/@types. This can be done with the help of the paths compiler option.

{
    "compilerOptions": {
        "baseUrl": ".",
        "paths": {
            "*": ["custom-typings/*"]
        }
    }
}

With this configuration in tsconfig.json TypeScript will look for code and declaration files in the custom-typings folder.
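
With that mapping in place, a declaration like the earlier index.d.ts can live outside node_modules/@types; the folder name under custom-typings matches the imported module name:

custom-typings/
  my-module/
    index.d.ts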

To verify where TypeScript is trying to resolve things, you can run the compiler with the traceResolution flag:

tsc --traceResolution

FreeNAS ZFS snapshot backup to Amazon S3

I’ve been looking for a way to back up my FreeNAS ZFS snapshots to an offsite location. I didn’t find much information on how to do this, so I had to come up with my own solution.

In this post I’m going to show you how to save your encrypted ZFS snapshots in Amazon S3. We’re going to use a FreeBSD jail together with GnuPG and s3cmd.

Adding a jail in FreeNAS

Go to the FreeNAS web UI and click Jails. Click Add and choose a name. If you click Advanced here you can change the IP address for the jail (I wanted to use DHCP).

 

[Screenshot: Adding an empty jail in FreeNAS]

Click OK and FreeNAS will set up a new jail for you, which takes a minute or two.

From now on we will have to work in the FreeNAS shell (SSH must be enabled under Services in the web UI).

To list all the jails running on your FreeNAS host we can run:

$ jls

Verify that the jail you created is listed.

To enter the jail run:

$ jexec your_jail_name
$ # Verify that you're in the jail
$ hostname
backup

We’re going to need to install some packages. First we need GnuPG, which we’ll use to encrypt our snapshots. Then we need s3cmd, which is used for uploading our snapshots to Amazon S3.

$ pkg install security/gnupg
$ pkg install net/py-s3cmd

I’m going to use symmetric AES256 encryption with a passphrase file, because I don’t want to store my data in the cloud unencrypted. Generate a random passphrase and store it in multiple locations (not just inside the jail): if the passphrase is lost, your backups will be worthless. The passphrase file needs to be accessible by the backup script. I have placed my passphrase file in the root directory.

$ echo "mypassphrase" > /root/snapshot-gpg-passphrase
$ chmod 400 /root/snapshot-gpg-passphrase
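
If you want a truly random passphrase rather than a hand-picked one, you could generate it with openssl from the FreeBSD base system (this replaces the echo above):

$ openssl rand -base64 32 > /root/snapshot-gpg-passphrase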

Next we’ll create a folder holding our current list of snapshots that we’re going to keep synced with S3.

$ mkdir /root/s3_sync_bucket
$ chmod 600 /root/s3_sync_bucket

We also need to configure s3cmd so run this and answer all questions:

$ s3cmd --configure

The backup script

This script should be run on the FreeNAS host. What it does:

  1. Creates a snapshot of the specified dataset
  2. Sends it to the backup jail where it’s encrypted and saved to file
  3. Removes the snapshot on the FreeNAS host
  4. Removes all snapshots older than 7 days
  5. Syncs the local S3 bucket directory with S3 using s3cmd
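
A minimal sketch of a script implementing these five steps could look like this. The jail name backup, the snapshot naming scheme, and the bucket name s3://my-backup-bucket are assumptions; the paths match the ones set up earlier in this post:

#!/bin/sh
# Back up an encrypted ZFS snapshot to S3 via the backup jail.
# Usage: backup_script.sh pool/dataset

DATASET="$1"                                   # e.g. my-pool/my-dataset
JAIL="backup"                                  # name of the backup jail
SNAPSHOT="${DATASET}@backup-$(date +%Y%m%d%H%M%S)"
FILENAME="$(echo "${SNAPSHOT}" | tr '/' '_').gpg"
SYNC_DIR="/root/s3_sync_bucket"                # folder inside the jail
BUCKET="s3://my-backup-bucket"                 # hypothetical bucket name

# 1. Create a snapshot of the specified dataset
zfs snapshot "${SNAPSHOT}"

# 2. Send it into the jail, where it's encrypted and saved to file
zfs send "${SNAPSHOT}" | jexec "${JAIL}" gpg --batch --symmetric \
    --cipher-algo AES256 \
    --passphrase-file /root/snapshot-gpg-passphrase \
    --output "${SYNC_DIR}/${FILENAME}"

# 3. Remove the snapshot on the FreeNAS host
zfs destroy "${SNAPSHOT}"

# 4. Remove all encrypted snapshot files older than 7 days
jexec "${JAIL}" find "${SYNC_DIR}" -type f -mtime +7 -delete

# 5. Sync the local folder with S3 (--delete-removed also deletes
#    files from the bucket that were removed locally)
jexec "${JAIL}" s3cmd sync --delete-removed "${SYNC_DIR}/" "${BUCKET}/"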

Create the script and run it manually or with crontab

$ touch /root/backup_script.sh
$ chmod 700 /root/backup_script.sh
$ /root/backup_script.sh my-pool/my-dataset

Edit the script to fit your needs.

Decrypting a backup

To decrypt a backup:

$ gpg --batch --decrypt --passphrase-file /root/snapshot-gpg-passphrase < backup_file
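
To actually restore the snapshot, the decrypted stream can be piped straight into zfs receive (a sketch; my-pool/restored-dataset is a placeholder, and this assumes gpg and zfs are available in the same shell):

$ gpg --batch --decrypt --passphrase-file /root/snapshot-gpg-passphrase < backup_file | zfs receive my-pool/restored-dataset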

ZFS replication from FreeNAS to Ubuntu

Recently I set up an old computer to be used as a NAS. I’m using FreeNAS, which has some SMB shares and a jail running Nextcloud. The setup is running on a mirrored boot device (2x USB drives) with one mirrored volume for data (2x 500GB leftover drives).

FreeNAS is built on top of FreeBSD and ZFS (OpenZFS). When looking at options for backing up the data on FreeNAS, I started digging into ZFS with its snapshot and replication capabilities. In FreeBSD, ZFS comes built in, but on Linux you have to install ZFS manually because of licensing issues.

Snapshots are a really nice feature of ZFS: a snapshot can be seen as a point-in-time copy of a dataset. A snapshot only consumes the disk space of the changes made to the dataset since it was taken, which makes it very space efficient. Snapshots can easily be rolled back, replicated to another machine, or mounted at another path.

Of course things got out of hand: I struggled for some hours with ZFS replication to an Ubuntu host I had prepared. It turns out that FreeBSD uses a newer version of ZFS that makes the receiving side hang with 100% CPU usage, as described in #5999. It’s funny, because the OpenZFS initiative switched from version numbers to feature flags precisely to make compatibility less of a problem. Anyhow, in 0.7 of ZFS on Linux this problem seems to have been fixed. I tried to compile the sources but got lost somewhere along the way. Then I found this PPA https://launchpad.net/~zfs-native/+archive/ubuntu/daily which only has packages for trusty (14.04), so I installed trusty and used the precompiled packages.

Installing ZFS on Linux (0.7 RC on Ubuntu trusty 14.04)

$ sudo add-apt-repository ppa:zfs-native/daily
$ sudo apt-get update
$ sudo apt-get install zfsutils-linux
# Then reboot

Create a simple lab ZFS pool with a file vdev (4GB)

$ dd if=/dev/zero of=example.img bs=1M count=4096
$ sudo zpool create pool-test /home/user/example.img
$ sudo zpool status

Replicate the snapshot

Take a snapshot
zfs snapshot mypool/dataset@snapshotname

To list all snapshots
zfs list -t snapshot

zfs send can be used to send a snapshot to standard output. To send it to a file with verbose logging:
zfs send -v mypool/dataset@snapshotname > snapshotfile

Send the snapshot to another host with SSH (to overwrite an existing dataset, use -F on the receive command)
zfs send -v mypool/dataset@snapshotname | ssh myhost zfs receive -v myotherpool/newdataset

For the above to work, SSH keys need to be set up between the hosts.

Log in as root and set up SSH keys on your FreeNAS host.
ssh-keygen -q -t rsa

Copy the contents of /root/.ssh/id_rsa.pub

On your Ubuntu host

# if it does not already exist
$ mkdir /root/.ssh
# if it does not already exist
$ touch /root/.ssh/authorized_keys
# paste the contents of your FreeNAS root id_rsa.pub here
$ chmod 700 /root/.ssh
$ chmod 600 /root/.ssh/authorized_keys

Then do a manual login from the FreeNAS host to add the Ubuntu host to known_hosts.

Conclusion

As of now, I will probably use FreeBSD as the receiving side instead, or save the snapshots to file and upload them to cloud storage.