A delightful store for speciality spreads: Paul Rothe & Sons

Late last year, I wanted to find a jar of loganberry jelly as a Christmas present for my granddad. This turned out to be surprisingly difficult – it’s a rare flavour that isn’t in any of the supermarkets, and even Fortnum & Mason and Harrods came up short. I did find it on Amazon, but something about buying jam on Amazon just feels wrong. Googling turned up an article about a place called *Paul Rothe & Sons*, a small family-run deli in Marylebone. I walked over on my lunch break, and found a cosy little shop – tables for eating sandwiches and drinks, and walls stacked with a huge variety of jams and spreads. I had a little trouble spotting the jar I was after, but one of the owners (Stephen Rothe) helped me find exactly what I wanted. Sadly I couldn’t stay for lunch (and it was so busy, I’m not sure I’d have found a seat!), but there was a very relaxed air among the tables. I did grab a sausage sandwich to go, and it was warm and tasty. Perfect for a cold December day! If you’re ever in London and need an unusual jam or a delicious sandwich, give it a look.
{% wide_image 2019/jams %}
A very small sample of the jams on offer. The gaps in the shelves are from the jars I'd just picked up -- a pair of loganberries, and one of gooseberry.

Notes on reading a UTF-8 encoded CSV in Python

Here’s a problem I solved today: I have a CSV file containing UTF-8 strings, and I want to parse it using Python – in a way that works in both Python 2.7 and Python 3. This proved to be non-trivial, so this blog post is a quick brain dump of what I did, in the hope it’s useful to somebody else and/or my future self.

## Problem statement

Consider the following minimal example of a CSV file:

```csv
1,alïce
2,bøb
3,cárol
```

We want to parse this into a list of lists:

```python
[
    ["1", "alïce"],
    ["2", "bøb"],
    ["3", "cárol"],
]
```

## Experiments

The following code can read the file in Python 2.7; here we treat the file as a bag of bytes and only decode after the CSV parsing is done:

```python
import csv

with open("example.csv", "rb") as csvfile:
    csvreader = csv.reader(csvfile, delimiter=",")
    for row in csvreader:
        row = [entry.decode("utf8") for entry in row]
        print(": ".join(row))
```

But if you run that code in Python 3, you get the following error:

```
Traceback (most recent call last):
  File "reader2.py", line 6, in <module>
    for row in csvreader:
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
```

The following code can read the file in Python 3:

```python
import csv

with open("example.csv", encoding="utf8") as csvfile:
    csvreader = csv.reader(csvfile, delimiter=",")
    for row in csvreader:
        print(": ".join(row))
```

But the `encoding` argument to `open()` only exists in Python 3, so you can't use this in Python 2. In theory it's backported as [`codecs.open()`](https://docs.python.org/3/library/codecs.html#codecs.open), but I get a different error if I use `codecs.open()` with this file in Python 2.7:

```
Traceback (most recent call last):
  File "reader3.py", line 7, in <module>
    for row in csvreader:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xef' in position 4: ordinal not in range(128)
```

This feels like it should be possible using only the standard library, but it was becoming sufficiently complicated that I didn't want to bother. I considered defining these as two separate functions, and running:

```python
import sys

if sys.version_info[0] == 2:
    read_csv_python2()
else:
    read_csv_python3()
```

but that felt a little icky, and would have been annoying for code coverage. Having two separate functions also introduces a source of bugs – I might remember to update one function, but not the other.

I found [csv23](https://pypi.org/project/csv23/) on PyPI, whose description sounded similar to what I wanted. The following snippet does what I want:

```python
import csv23

with csv23.open_reader("example.csv") as csvreader:
    for row in csvreader:
        print(": ".join(row))
```

This reads the CSV file as UTF-8 in both Python 2 and 3. Adding a third-party library is mildly annoying, but it's easier than trying to write, test and maintain this functionality myself.

## tl;dr

Python 2 only:

```python
import csv

with open("example.csv", "rb") as csvfile:
    csvreader = csv.reader(csvfile, delimiter=",")
    for row in csvreader:
        row = [entry.decode("utf8") for entry in row]
        print(": ".join(row))
```

Python 3 only:

```python
import csv

with open("example.csv", encoding="utf8") as csvfile:
    csvreader = csv.reader(csvfile, delimiter=",")
    for row in csvreader:
        print(": ".join(row))
```

Both Python 2 and 3:

```python
import csv23

with csv23.open_reader("example.csv") as csvreader:
    for row in csvreader:
        print(": ".join(row))
```

Iterating in fixed-size chunks in Python

Here’s a fairly common problem I have: I have an iterable, and I want to go through it in “chunks”. Rather than looking at every item of the sequence one-by-one, I want to process multiple elements at once. For example, when I’m using the [bulk APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) in Elasticsearch, I can index many documents with a single API call, which is more efficient than making a new API call for every document.

Here’s the sort of output I want:

```python
for c in chunked_iterable(range(14), size=4):
    print(c)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13)
```

I have two requirements which are often missed in Stack Overflow answers or other snippets I’ve found:

*   It has to work with generators, where you don’t know the length upfront, and you can’t slice to a particular point in the generator – e.g. iterating over files in a directory.
*   I don’t want “filler” values at the end – if it doesn’t line up neatly on a boundary, I’d rather have a truncated chunk than extra values.

So to save me having to find it again, this is what I usually use:

```python
import itertools


def chunked_iterable(iterable, size):
    it = iter(iterable)
    while True:
        chunk = tuple(itertools.islice(it, size))
        if not chunk:
            break
        yield chunk
```

Most of the heavy lifting is done by [itertools.islice()](https://docs.python.org/3/library/itertools.html#itertools.islice); I call that repeatedly until it returns an empty tuple. The itertools module has lots of useful functions for this sort of thing.

The `it = iter(iterable)` line may be non-obvious – this ensures that the function uses the same iterator throughout. If you pass a container (like a list) to islice() directly, it creates a new iterator on each call – and then you only ever get the first handful of elements. For example, calling `chunked_iterable([1, 2, 3, 4, 5], size=2)` without this line would emit `(1, 2)` forever. I think it’s the difference between a *container* (for which `iter(…)` returns a new object each time) and an *iterator* (for which `iter(…)` returns itself) – there’s a quick demonstration below. I forget the exact details, but I remember first reading about this in Brett Slatkin’s book [*Effective Python*](https://effectivepython.com).
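To make that distinction concrete, here’s what it looks like at a Python prompt (this is standard Python behaviour, not part of the snippet above):

```python
>>> nums = [1, 2, 3]             # a container
>>> iter(nums) is iter(nums)     # each call to iter() gives a fresh iterator
False
>>> it = iter(nums)              # an iterator
>>> iter(it) is it               # calling iter() on an iterator returns itself
True
```

This is why the chunking function keeps a single iterator around and passes it into islice() again and again – each slice picks up where the previous one stopped.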

Getting credentials for an assumed IAM Role

In AWS, everybody has a user account, and you can give each user very granular permissions. For example, you might allow some users complete access to your S3 buckets, databases and EC2 instances, while other users just have read-only permissions. Maybe you have another user who can only see billing information. These permissions are all managed by [AWS IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html).

Sometimes you want to give somebody temporary permissions that aren’t part of their usual IAM profile – maybe for an unusual operation, or to let them access resources in a different AWS account. The mechanism for managing this is an [*IAM role*](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html). An IAM role is an identity with certain permissions and privileges that can be *assumed* by a user. When you assume a role, you get the associated permissions.

For example, at work, the DNS entries for wellcomecollection.org are managed in a different AWS account to the one I usually work in – but I can assume a role that lets me edit the DNS config.

If you’re using the AWS console, you can assume a role in the GUI – there’s a dropdown menu with a button for it:

![](/images/2018/iam_role_gui.png)

If you’re using the SDK or the CLI, it can be a little trickier – so I wrote a script to help me.

## The “proper” approach

According to [the AWS docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html), you can define an IAM role as a profile in `~/.aws/config`. This example shows a role profile called `dns_editor_profile`:

```
[profile dns_editor_profile]
role_arn = arn:aws:iam::123456789012:role/dns_editor
source_profile = user1
```

When I use this profile, the CLI automatically creates temporary credentials for the `dns_editor` role, and uses those during my session. When the credentials expire, it renews them. Seamless!

This config is also supported in [the Python SDK](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#assume-role-provider), and I’d guess it works with SDKs in other languages as well – but when I tried it with Terraform, it struggled to find credentials. I don’t know if this is a gap in the Go SDK, or in Terraform’s use of it – either way, I needed an alternative. So rather than configuring credentials implicitly, I wrote a script to create them explicitly.

## Creating temporary AWS credentials for a role

There are a couple of ways to pass AWS credentials to the SDK: as environment variables, with SDK-specific arguments, or with the shared credentials file in `~/.aws/credentials`. I store the credentials in the shared credentials file, because all the SDKs can use it, so my script has two steps:

1.  Create a set of temporary credentials
2.  Store them in `~/.aws/credentials`

By keeping those as separate steps, it’s easier to change the storage later if, for example, I want to use environment variables.

### Create a set of temporary credentials

AWS credentials are managed by AWS Security Token Service (STS). You get a set of temporary credentials by calling [the `assume_role()` API](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html?highlight=sts#STS.Client.assume_role). Let’s suppose we already have the account ID (the 12-digit number in the role ARN above) and the role name.
We can get some temporary credentials like so:

```python
import boto3


def get_credentials(*, account_id, role_name):
    sts_client = boto3.client("sts")
    role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    role_session_name = "…"
    resp = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName=role_session_name
    )
    return resp["Credentials"]
```

Here `RoleArn` is the ARN (AWS identifier) of the IAM role we want to assume, and `RoleSessionName` is an identifier for the session. If multiple people assume a role at the same time, we want to be able to distinguish the different sessions. You can put any alphanumeric string there (no spaces, though a few punctuation characters are allowed). I use my IAM username and the details of the role I’m assuming, so it’s easy to understand in audit logs:

```python
iam_client = boto3.client("iam")

username = iam_client.get_user()["User"]["UserName"]
role_session_name = f"{username}@{role_name}.{account_id}"
```

We could also set the `DurationSeconds` parameter, which configures how long the credentials are valid for. It defaults to an hour, which is fine for my purposes – but you might want to change it if you have longer sessions, and don’t want to keep re-issuing credentials.

Note that I’m using two Python 3 features here: [f-strings for interpolation](https://www.python.org/dev/peps/pep-0498/), which I find much cleaner, and the `*` in the argument list, which creates [keyword-only arguments](https://www.python.org/dev/peps/pep-3102/), to enforce clarity when this function is called.

### Store the credentials in ~/.aws/credentials

The format of the credentials file is something like this:

```
[profile_name]
aws_access_key_id=ABCDEFGHIJKLM1234567890
aws_secret_access_key=ABCDEFGHIJKLM1234567890

[another_profile]
aws_access_key_id=ABCDEFGHIJKLM1234567890
aws_secret_access_key=ABCDEFGHIJKLM1234567890
aws_session_token=ABCDEFGHIJKLM1234567890
```

Each section is a new AWS profile, and contains an access key, a secret key, and optionally a session token. That session token is tied to the `RoleSessionName` we gave when assuming the role.

We could try to edit this file by hand – or easier, we could use the [configparser module](https://docs.python.org/3/library/configparser.html) in the Python standard library, which is meant for working with this type of file. First we have to load the existing credentials, then look for a profile with this name. If it’s present, we replace it; if not, we create it. Then we store the new credentials, and rewrite the file. Like so:

```python
import configparser
import os


def update_credentials_file(*, profile_name, credentials):
    aws_dir = os.path.join(os.environ["HOME"], ".aws")
    credentials_path = os.path.join(aws_dir, "credentials")

    config = configparser.ConfigParser()
    config.read(credentials_path)

    if profile_name not in config.sections():
        config.add_section(profile_name)
    assert profile_name in config.sections()

    config[profile_name]["aws_access_key_id"] = credentials["AccessKeyId"]
    config[profile_name]["aws_secret_access_key"] = credentials["SecretAccessKey"]
    config[profile_name]["aws_session_token"] = credentials["SessionToken"]

    with open(credentials_path, "w") as outfile:
        config.write(outfile, space_around_delimiters=False)
```

Most of this is fairly standard use of the configparser library. The one item of note: I remove the spaces around delimiters, because when I tried leaving them in, boto3 got upset – I think it read the extra space as part of the credentials.
### Read command-line parameters

Finally, we need to get some command-line parameters to tell us what the account ID and role name are, and optionally a profile name to store in `~/.aws/credentials`. Recently I’ve been trying [click](https://palletsprojects.com/p/click/) for command-line parameters, and I quite like it. Here’s the code:

```python
import click


@click.command()
@click.option("--account_id", required=True)
@click.option("--role_name", required=True)
@click.option("--profile_name")
def save_assumed_role_credentials(account_id, role_name, profile_name):
    if profile_name is None:
        profile_name = account_id

    credentials = get_credentials(
        account_id=account_id,
        role_name=role_name
    )

    update_credentials_file(profile_name=profile_name, credentials=credentials)


if __name__ == "__main__":
    save_assumed_role_credentials()
```

This defines a command-line interface with `@click.command()`, then sets up two required command-line parameters – account ID and role name. The profile name is a third, optional parameter, which defaults to the account ID if you don’t supply one. These parameters are passed into the `save_assumed_role_credentials()` function, which calls the two helper functions.

Now I can call the script like so:

```console
$ python issue_temporary_credentials.py --account_id=123456789012 --role_name=dns_editor --profile_name=dns_editor_profile
```

and it creates a set of credentials and writes them to `~/.aws/credentials`. To use this profile, I set the `AWS_PROFILE` variable:

```console
$ AWS_PROFILE=dns_editor_profile aws s3 ls
```

and this command now runs with the credentials for that profile.

## tl;dr

If you just want the code, here’s the final copy of the script:

```python
# issue_temporary_credentials.py

import configparser
import os

import boto3
import click


def get_credentials(*, account_id, role_name):
    iam_client = boto3.client("iam")
    sts_client = boto3.client("sts")

    username = iam_client.get_user()["User"]["UserName"]
    role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    role_session_name = f"{username}@{role_name}.{account_id}"

    resp = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName=role_session_name
    )

    return resp["Credentials"]


def update_credentials_file(*, profile_name, credentials):
    aws_dir = os.path.join(os.environ["HOME"], ".aws")
    credentials_path = os.path.join(aws_dir, "credentials")

    config = configparser.ConfigParser()
    config.read(credentials_path)

    if profile_name not in config.sections():
        config.add_section(profile_name)
    assert profile_name in config.sections()

    config[profile_name]["aws_access_key_id"] = credentials["AccessKeyId"]
    config[profile_name]["aws_secret_access_key"] = credentials["SecretAccessKey"]
    config[profile_name]["aws_session_token"] = credentials["SessionToken"]

    with open(credentials_path, "w") as outfile:
        config.write(outfile, space_around_delimiters=False)


@click.command()
@click.option("--account_id", required=True)
@click.option("--role_name", required=True)
@click.option("--profile_name")
def save_assumed_role_credentials(account_id, role_name, profile_name):
    if profile_name is None:
        profile_name = account_id

    credentials = get_credentials(
        account_id=account_id,
        role_name=role_name
    )

    update_credentials_file(profile_name=profile_name, credentials=credentials)


if __name__ == "__main__":
    save_assumed_role_credentials()
```

A script for backing up Tumblr posts and likes

A few days ago, Tumblr announced some new content moderation policies that include the mass deletion of any posts deemed to contain “adult content”. (If you missed the news, the Verge has a [good summary](https://www.theverge.com/2018/12/3/18123752/tumblr-adult-content-porn-ban-date-explicit-changes-why-safe-mode).)

If my dashboard and Twitter feed are anything to go by, this is bad news for Tumblr. Lots of people are getting [flagged for innocuous posts](http://the-earth-story.com/post/180769626996/flags), the timeline seems to be falling apart, and users are leaving in droves. The new policies don’t solve the site’s problems (porn bots, spam, child grooming, among others), but they do hurt its marginalised users. For all its faults, Tumblr was home to communities of sex workers, queer kids, fan artists, people with disabilities – and it gave many of them a positive, encouraging, empowering online space. I’m sad that those are going away.

Some people are leaving the site and deleting their posts as they go (rather than waiting for Tumblr to do it for them). Totally understandable, but it leaves a hole in the Internet. A lot of that content just isn’t available anywhere else. In theory you can export your posts with the official export tool, but I’ve heard mixed reports of its usefulness – I suspect it’s clogged up as lots of people try to leave.

In the meantime, I’ve posted [a couple of my scripts](https://github.com/alexwlchan/backup_tumblr) that I use for backing up my posts from Tumblr. They cover both posts and likes, save the full API responses, and can optionally download the media files (photos, videos, and so on). They’re a bit scrappy – not properly tested or documented – but content is already being deleted by Tumblr and others, so getting them out quickly seemed more useful. If you use Tumblr, you might want to give them a look.

Keeping track of my book recommendations

I have a text file where I write down every book recommendation I receive. It has three lists:

1.  **Personal recommendations.** Anything recommended specifically to me – usually from somebody who knows my reading tastes, so there’s a good chance this will be a book I enjoy.
2.  **General recommendations from friends.** Any recommendations from somebody I trust, but not specifically to me – for example, somebody tweeting “I really enjoyed this book and you should all read it”. I enjoy a lot of the same books as my friends, but this is a softer recommendation, and I feel less obliged to follow up on these.
3.  **Everything else.** Recommendations from retweets, strangers at parties, people I don’t know very well, stuff I saw while browsing in Waterstones, and so on. These are mostly valuable in the aggregate – a single recommendation for a book isn’t very useful, but knowing that six different people recommended it might be.

I’ve tried slicing in other ways – fiction and non-fiction, by author, genre, date, and so on – but sorting by quality of recommendation is the one that I keep going back to.

A lot of what I read comes from these lists. (I have similar lists for films and TV shows.) I’m not bound by the list, and a bit of spontaneity is helpful to avoid an echo chamber effect – but using it as a starting point means I know I’m likely to enjoy something before I pick it up. As a bonus, having this list means that if I read something and enjoy it, I get to go back and thank the person who gave me the recommendation. Sometimes it’s years later, but better late than never!

[David](https://twitter.com/DRMacIver) has been thinking a lot about reading on Twitter recently, and I wrote about my system [in a reply](https://twitter.com/alexwlchan/status/1062275751859404800). This blog post is the expanded version of that thought.

My visit to the Aberdulais Falls

In September, I was in Cardiff to help organise [PyCon UK][pyconuk]. I had a huge amount of fun at the conference, but running a five-day event gets quite tiring. This year I took a few days of extra holiday, so I could unwind before returning home. Unfortunately [heavy storms][bronagh] kept me inside for several days, but I did venture out at the end of the week to the [Aberdulais Falls][nattrust].

Aberdulais is a village in south Wales, about 45 minutes’ drive from Cardiff, and a place with a long industrial history. It had easy supplies of coal and wood, and it sits on a powerful river – the River Dulais. Today the former tin plate works are owned and managed by the National Trust, and I decided to go and have a look.

Etymology note: the word _Aberdulais_ is Welsh for _mouth of the river Dulais_. _Aber_ is a Celtic prefix that appears in [lots of place names][aber_name] – well-known examples are places like Aberdeen and Aberystwyth.

In 1584, the German engineer Ulrich Frosse had developed a new way to smelt copper, but he wanted to keep his process safe from “pryinge eyes”. The Welsh countryside is nice and quiet, so he set up a smelting works in Aberdulais – the first of its kind in Wales. The copper ore was mined in Cornwall, the coal and charcoal were supplied from nearby Neath, and a waterwheel on the river powered the site. (If you’re interested, I found [an 1880 lecture][lecture] that gives more detail about the history of smelting in Wales.)
A sixteenth-century woodcut of copper smelting. Taken from Wikimedia Commons; public domain.
The [National Trust site](https://www.nationaltrust.org.uk/aberdulais-tin-works-and-waterfall/features/aberdulais-an-industrial-revolution-since-1584) says the copper was used in coins minted for Queen Elizabeth I. I tried to find some pictures of the coins in question, but I couldn’t find enough detail to pick them out. Based on dates, I think it would have been something like this [gold pound][six_pound], but that’s only a guess.

Over time, Aberdulais became a site of different industries – textile milling, cloth production, even a flour mill – then in 1831, it became the site of a tinplate works. [Tin plating][tinning] is the process of coating thin sheets of iron or steel with tin, so they don’t rust. Tin plate is used for things like cooking utensils and canned food, and versions of it are still in use today. Wales had an incredibly successful tin plating industry – so much so that the US slapped [massive tariffs][tariffs] on it, and shut the whole thing down. The National Trust site is based in the remains of one of the old tin plating works.

So what’s it like to visit?

Read more →

Finding SNS topics without any subscriptions

I make regular use of [Amazon SNS][sns] when working with message queues in AWS.

SNS is a notification service. A user can send a notification to a *topic*. Each topic can have multiple *subscribers*, which receive a copy of every message sent to the topic – something like an HTTP endpoint, an email address, or an Amazon SQS queue. A single notification can go to multiple places.

A common use case is something like push notifications on your phone. For example, when a game sends you a notification to tell you about new content – that could be powered by SNS.

We use SNS as an intermediary for SQS at work. Rather than sending a message directly to a queue, we send messages to an SNS topic, and the queue subscribes to the topic. Usually there’s a 1-to-1 relationship between topics and queues, but SNS is useful if we ever want to do some debugging or monitoring – we can create a second subscription to the topic, get a copy of the messages, and inspect them without breaking the original queue.

We’ve had a few bugs recently where the subscription between the SNS topic and SQS queue gets broken. When nothing subscribes to a topic, any notifications it receives are silently discarded – because there’s nowhere for them to be sent. I wanted a way to detect if this had happened – do we have any topics without any subscribers?

You can see this information in the console, but it’s a little cumbersome – anything more than a handful of topics becomes unwieldy – so I wrote a script. Normally I’d reach for Python, but I’m trying to learn some new languages, so I decided to write it in Go. I’ve only dabbled in Go, and this was a chance to write a useful program and test my Go skills.

In this post, I’ll explain how I wrote the script. Even if the Go isn’t very idiomatic, I hope it’s a useful insight into how I write this sort of thing, and what I’m learning as a Go novice.

[sns]: https://en.wikipedia.org/wiki/Amazon_Simple_Notification_Service
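(The Go version is what the rest of the post walks through, but the underlying check is small: list every topic, then ask each one for its subscriptions. Here’s a rough sketch of the same idea in Python with boto3 – not the Go script from the post:

```python
import boto3

sns = boto3.client("sns")

# Walk every topic in the account, page by page
for page in sns.get_paginator("list_topics").paginate():
    for topic in page["Topics"]:
        topic_arn = topic["TopicArn"]

        # A topic with no subscriptions silently discards its messages.
        # list_subscriptions_by_topic is paginated too, but an empty
        # first page is enough to tell us the topic has no subscribers.
        resp = sns.list_subscriptions_by_topic(TopicArn=topic_arn)
        if not resp["Subscriptions"]:
            print(topic_arn)
```
)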

Read more →

How do you hide a coin for 400 years?

As part of an upcoming blog post, I’ve been trawling the Internet for information about Elizabethan coins. Mostly out of curiosity, as I know very little about old coins. Something I’ve learnt: historical coins are *valuable*. Five-figure prices aren’t unheard of, and I found one coin selling [for nearly £100k](https://www.baldwin.co.uk/coins/great-britain/charles-i-triple-unite-1642.html):
A Triple Unite from the Oxford mint, a gold coin produced for Charles I in 1642. It was worth sixty shillings.
Which got me thinking: suppose you were an unscrupulous time traveller, and you wanted to make some extra cash. Going back in time, getting some coins, and then “discovering” them in the present day could be quite lucrative. But how do you do it in practice?

You can’t just bring the coins straight to the present. They’d be in much better condition than a coin that actually waited for 400 years, and carbon dating would be thrown off. Your coins would be derided as fakes, or at least prompt some tricky questions. You need to hide them somewhere in the past, and retrieve them in the present.

But where do you leave a seventeenth-century coin so it’s still safe in 2018? It’s certainly possible, if expensive – the Tower of London have been looking after the same jewels for centuries. But I don’t think it’s trivial, at least not without attracting some attention. It’s very tricky if you don’t have outside help. And the further back you go, the harder it becomes – imagine trying to save not coins, but dinosaur bones.

I wrote this as a late-night musing, but after sleeping on it, I realised it’s not just a theoretical problem. Nuclear power creates nuclear waste, and that waste has to go somewhere. Most plans involve putting it in a bunker, and sealing the bunker for at least 10,000 years – but how do you stop future humans exploring? How do you ensure [nobody opens your radioactive death bunker](https://www.damninteresting.com/this-place-is-not-a-place-of-honor/)?

Suggestions on the back of a postcard.

How to set the clock on a Horstmann Electronic 7 water heater

The clocks went back last night, which means changing the clocks on my appliances. One of my few remaining appliances that has a clock but no Internet connection is the timer on my boiler. It’s a new boiler (I moved in June), so I’ve never had to set the clock on it before. It turns out it had selected the correct winter/summer setting itself, but it had drifted by twenty minutes, so I decided to set it anyway. This is the timer on my boiler:
My boiler is in a utility cupboard next to my front door. When I turn on the hot water boost, I put the post-it note on the cupboard door, so I don’t leave it on when I go out.
The name on the bottom right says “Horstmann Electronic 7”, so I did the obvious thing and googled “set clock horstmann electronic 7 boiler”. It took me several minutes to find the answer – in the [user guide for the timer](https://www.electricity.gg/media/55497/Horstmann-User-Guide.pdf) – so for the sake of future!me and other Googlers, here are the instructions for setting the clock. Unless you have this exact appliance, you can stop reading.

Read more →