stop for now

This commit is contained in:
Tommy Parnell
2019-01-21 15:02:56 -05:00
parent fd5c668820
commit 3143f1c76b
44 changed files with 244 additions and 25 deletions

View File

@@ -0,0 +1,21 @@
title: Hosting your blog on the cheap
date: 2018-08-22 04:49:46
tags:
- cloud
---
A load of people have been asking me lately how I host my blog. In case it's not apparent, I make zero dollars from this blog. I refuse to place ads on the page just to gain pennies of revenue. I do this not because I don't think I should get paid, but simply because I find ads disruptive to the reader. At the end of the day, blogs should have a high signal-to-noise ratio.
<!-- more -->
Since I make no money on this, my strategy is all about cutting costs. My grandfather used to say "take care of the pennies, and the pounds will take care of themselves." Since my grandfather is in England, where the currency is the pound, he was telling me to pay attention even to the smallest amounts of money.
The first big decision for a blog is what "engine" you are going to use, or whether you are going to build your own. These usually fall into two categories: static sites, where posts are written in text files and compiled into static HTML, and server-rendered blogs such as WordPress. When a request is made to a server-rendered blog, the HTML is built on the fly and delivered to the reader. Static sites, on the other hand, are precomputed and simply delivered to the browser.
I won't go into the details of what is better for different scenarios. If you are being cheap, you will want a static site. Static sites are precomputed, which essentially means you just need to serve files to the user: there is no dynamic server to host, no database to manage, and so on. There are a few generators I like. This blog runs on [Hexo](https://hexo.io).
<!-- So I know what you are thinking, static sites are just 'better' for page load time. While this is true, they can lack dynamic features that might be important to you, such as adding new blog posts on a schedule, or limiting ip addresses, or even some kind of login/subscription model. -->

View File

@@ -0,0 +1,13 @@
title: Hosting your webapp on the cheap
date: 2018-08-22 05:11:20
tags:
- cloud
---
So many people have asked me how I've hosted apps in the past. There is a bit of an art, at the moment, to hosting your apps extremely cheaply in the cloud. I've heard of hosting costs being cut from thousands of dollars to pennies.
<!-- more -->
## Hosting

View File

@@ -0,0 +1,18 @@
title: How I minify images for my blog
tags:
- javascript
- tools
---
Ok, so I'm really lazy, and I honestly think that has helped me a lot in this industry. I always try to work smarter, not harder. I take many screenshots for this blog, and I need to optimize them. In case you didn't know, images are often larger than they need to be, which slows down the download time. However, I don't ever want to load them into Photoshop. Too much time and effort!
<!-- more -->
At first I tried to compress images locally, but it took too long to run through all the images I had. So recently I started using a service called [TinyPNG](https://tinypng.com/) to compress images. Now the website seems to indicate that you upload images and get back optimized versions. However, to me this takes too much time. I don't want the hassle of zipping my images, uploading them, and downloading the results. Again, lazy!
So I figured out they have a CLI on npm. Easy to install, just use npm to install it globally: `npm install -g tinypng-cli`.
Now you have to call the CLI; these are the flags I use: `tinypng . -r -k YourKeyHere`. The period tells tinypng to look for images in the current directory, `-r` tells it to look recursively (essentially, to go through child directories as well), and `-k YourKeyHere` passes the API key you get by logging in. On the free plan you get 500 compressions a month. Hopefully you will fall into the pit of success like I did!
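If you want to wire this into a script instead of remembering the flags, a minimal Node sketch like the one below works; the `source/_posts` path and the `TINYPNG_KEY` environment variable are my own assumptions, not anything from the tinypng-cli docs.
```js
// compress-images.js: a hypothetical wrapper around the tinypng-cli command above
const { execSync } = require('child_process');

const apiKey = process.env.TINYPNG_KEY; // assumed env var holding your TinyPNG API key
if (!apiKey) {
  throw new Error('Set TINYPNG_KEY before running this script');
}

// Recursively compress every image under the (assumed) Hexo posts folder
execSync(`tinypng ./source/_posts -r -k ${apiKey}`, { stdio: 'inherit' });
```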
![an image showing the tiny png results](1.png)

View File

@@ -0,0 +1,3 @@
title: 'I used ask.com for 30 days, and this is what I learned'
tags:
---

View File

@@ -0,0 +1,3 @@
title: Migrating from azure web app to containers
tags:
---

View File

@@ -0,0 +1,3 @@
title: Precompiling razor views in dotnet core
tags:
---

View File

@@ -0,0 +1,3 @@
title: Securing your dotnet core apps with hardhat
tags:
---

View File

@@ -0,0 +1,16 @@
title: The ultimate chaos monkey. When your cloud provider goes down!
date: 2017-03-13 15:20:14
tags:
- amazon
- aws
- cloud
- DevOps
---
A few weeks ago, the internet dealt with the fallout of [the AWS outage](https://techcrunch.com/2017/02/28/amazon-aws-s3-outage-is-breaking-things-for-a-lot-of-websites-and-apps/). AWS, or Amazon Web Services, is Amazon's cloud platform, and the most popular one in use. There are other platforms similar in scope, such as Microsoft's Azure. Amazon had an S3 outage that ultimately caused other services to fail in the most popular and oldest region they own, dubbed `us-east-1`, which is in Virginia.
This was one of the largest cloud outages we have seen, and users of the cloud found out firsthand that the cloud is imperfect. In short, when you are using the cloud, you are using services and infrastructure developed by human beings. Still, most people turn to cloud vendors because the scope of their applications does not, and should not, include managing large infrastructure.
The Netflixes and Amazons of the world are large. Really large, and for them total availability is not just a preferred option but a basic requirement. Companies that are huge users of the cloud have started to think about region-level dependencies. In short, for huge companies, being in one region is perilous.
In fact, this isn't the first time we have heard such things. In 2013 Netflix published [an article](http://techblog.netflix.com/2013/05/denominating-multi-region-sites.html) describing how they run in multiple regions. There is an obvious cost in making something work multi-region, so this is mostly for the large companies; if you are a multi-billion-dollar organization, though, going multi-region would probably be an awesome idea.

View File

View File

@@ -34,28 +34,27 @@ namespace TerribleDev.Blog.Web
}
public IPost ParsePost(string postText, string fileName)
{
var splitFile = postText.Split("---");
var ymlRaw = splitFile[0];
var markdownText = string.Join("", splitFile.Skip(1));
var pipeline = new MarkdownPipelineBuilder().UseEmojiAndSmiley().Build();
var postContent = Markdown.ToHtml(markdownText, pipeline);
var postContentPlain = String.Join("", Markdown.ToPlainText(markdownText, pipeline).Split("<!-- more -->"));
var postSettings = ParseYaml(ymlRaw);
var resolvedUrl = !string.IsNullOrWhiteSpace(postSettings.permalink) ? postSettings.permalink : fileName.Split('.')[0].Replace(' ', '-').WithoutSpecialCharacters();
var summary = postContent.Split("<!-- more -->")[0];
var postSummaryPlain = postContentPlain.Split("<!-- more -->")[0];
return new Post()
{
PublishDate = postSettings.date,
tags = postSettings.tags?.Select(a=>a.Replace(' ', '-').WithoutSpecialCharacters().ToLower()).ToList() ?? new List<string>(),
Title = postSettings.title,
Url = resolvedUrl,
Content = new HtmlString(postContent),
Summary = new HtmlString(summary),
SummaryPlain = postSummaryPlain,
ContentPlain = postContentPlain
};
var splitFile = postText.Split("---");
var ymlRaw = splitFile[0];
var markdownText = string.Join("", splitFile.Skip(1));
var pipeline = new MarkdownPipelineBuilder().UseEmojiAndSmiley().Build();
var postContent = Markdown.ToHtml(markdownText, pipeline);
var postContentPlain = String.Join("", Markdown.ToPlainText(markdownText, pipeline).Split("<!-- more -->"));
var postSettings = ParseYaml(ymlRaw);
var resolvedUrl = !string.IsNullOrWhiteSpace(postSettings.permalink) ? postSettings.permalink : fileName.Split('.')[0].Replace(' ', '-').WithoutSpecialCharacters();
var summary = postContent.Split("<!-- more -->")[0];
var postSummaryPlain = postContentPlain.Split("<!-- more -->")[0];
return new Post()
{
PublishDate = postSettings.date,
tags = postSettings.tags?.Select(a => a.Replace(' ', '-').WithoutSpecialCharacters().ToLower()).ToList() ?? new List<string>(),
Title = postSettings.title,
Url = resolvedUrl,
Content = new HtmlString(postContent),
Summary = new HtmlString(summary),
SummaryPlain = postSummaryPlain,
ContentPlain = postContentPlain
};
}
}
}

View File

@@ -3,6 +3,7 @@ permalink: anti-forgery-tokens-in-nancyfx-with-razor
id: 33
updated: '2014-06-11 20:00:34'
date: 2014-06-11 19:34:13
tags:
---
Getting started with anti-forgery tokens in NancyFX with razor views is pretty simple.

View File

@@ -3,6 +3,7 @@ permalink: fixing-could-not-load-file-or-assembly-microsoft-dnx-host-clr-2
id: 53
updated: '2015-09-09 17:34:41'
date: 2015-09-09 10:08:18
tags:
---
So I recently ran into this error where the latest bits could not load Microsoft.Dnx.Host.Clr; here is what I did to fix it.

View File

@@ -0,0 +1,80 @@
title: Rebuilding this blog for performance
date: 2019-01-21 17:56:34
tags:
- performance
- battle of the bulge
- javascript
- dotnet
---
So many people know me as a very performance-focused engineer, and as someone who cares about perf I've always been a bit embarrassed about this blog. In actual fact, this blog as it sits now is **fast** by most people's standards. I got a new job in July, and, well, I work with an [absolute mad lad](https://twitter.com/markuskobler) who is making me feel pretty embarrassed with his 900ms page load times. So I've decided to build my own blog engine and compete against him.
<!-- more -->
## Approach
Ok, so I want a really fast blog, but one that does not sacrifice design. I plan to pre-compute the HTML into memory, but I am not going to serve static files. In this case, I'll need an application server. I'm going to have my own CSS styles, but I'm hoping to be in the (almost) no-JS camp. Not that I dislike JS, but I want to do as much pre-computing as possible, and I don't want to slow the page down with compute in the client.
## Features
So this blog needs a view to read a post, a home page with links to the last 10 blog posts plus a pager to go back further in time, and a page listing posts by tag, with each tag linking to its posts.
## Picking Technologies
So in the past my big philosophy has been that most programming languages and technologies really don't matter for most applications. In fact, this use-case *could* and probably should be one of them, but when you go to the extremes that I do, you want to look at benchmarks. [TechEmpower](https://www.techempower.com/benchmarks/) benchmarks the top programming languages and frameworks. Since my blog will mostly be precomputed bytes in, bytes out, we should look at the plaintext benchmark. The top 10 web servers include Go, Java, Rust, C++, and C#. Now, I know Rust, Go, and C# pretty well. Since the Rust and Go web servers listed in the benchmark were mostly things no one really uses, I decided to use dotnet. This is also for a bit of a laugh, because my competition hates dotnet, and I have deep dotnet expertise I can leverage.
## Server-side approach
So, as previously mentioned, we'll be precomputing blog posts. I plan to compute the posts once and hand them down to the views. If we use completely immutable data structures, we'll avoid any locking that could slow down the app.
## ASPNET/Dotnet Gotchas
So dotnet is a managed platform with a runtime. Microsoft has some [performance best practices](https://docs.microsoft.com/en-us/aspnet/core/performance/performance-best-practices?view=aspnetcore-2.2), but here are some of my thoughts.
* There is a tool called [crossgen](https://github.com/dotnet/coreclr/blob/master/Documentation/building/crossgen.md) which compiles DLLs ahead of time to native code.
* Dotnet's garbage collector is really good, but it struggles to collect long-lived objects. Our objects will need to either be ephemeral or live in memory forever.
* The garbage collector struggles with large objects, especially large strings. We'll have to avoid large string allocations when possible.
* Dotnet has reference types, such as classes and strings, while most other things (structs and primitives) are value types. [Value types are allocated](https://blog.terribledev.io/c-strings/) on the stack, which is far cheaper than the heap.
* Exceptions are expensive to throw in dotnet. I'm going to avoid hitting them whenever possible.
* Cache all the things!
In the past we had to pre-compile Razor views, but in 2.x of dotnet core that is now built in, so that's one thing I don't have to worry about.
## Client side page architecture and design
So here are my thoughts on the client side of things.
* Minify all the content
* Fingerprint all CSS/JS content and set cache headers to the maximum time
* Deliver everything with Brotli compression
* Always use `Woff2` for fonts
* Avoid expensive CSS selectors
  * `:nth-child`
  * `fixed`
  * partial matching `[class^="wrap"]`
* Images
  * Re-encode all images in the build to `jpeg 2000`, `jpeg xr`, and `webp`
  * Serve `jpeg 2000` to iOS
  * Serve `jpeg xr` to IE11 and Edge
  * Send `webp` to everyone else
* PWA
  * Use a service worker to cache assets (a sketch follows this list)
  * Also use a service worker to prefetch blog posts
  * Offline support
* CDN
  * Use Cloudflare to deliver assets faster
  * Cloudflare's Argo improves geo-routing
  * Serve 301s from inside Cloudflare's own datacenters with Workers
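Below is a minimal sketch of the asset-caching service worker mentioned in the PWA bullet above; the cache name and asset paths are my assumptions, not this blog's actual worker.
```js
// sw.js: minimal cache-first service worker (illustrative only)
const CACHE = 'blog-static-v1';
const ASSETS = ['/', '/css/site.css']; // hypothetical fingerprinted assets

self.addEventListener('install', event => {
  // Pre-cache the core assets when the worker installs
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Serve from the cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```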
## Tools
These are the tools I'm using to measure performance.
* `lighthouse` - Built into Chrome (it's in the Audits tab of the devtools), this surfaces a lot of performance and PWA improvements.
* [Web Hint](https://webhint.io/) is like a linter for your web pages. The tool suggests a ton of improvements, from accessibility to performance.
* I really like [pingdom's](https://tools.pingdom.com/) page load time tool.
* Good ol' [web page test is also great](https://www.webpagetest.org/)
* The Chrome devtools can also give you a breakdown of what unused CSS you have on the page

View File

@@ -1,19 +1,23 @@
title: The battle of the buldge. Visualizing your javascript bundle
title: The battle of the bulge. Visualizing your javascript bundle
date: 2018-10-17 13:19:18
tags:
- javascript
- battle of the bulge
- performance
---
So in case you haven't been following me, I joined CarGurus in July. At CarGurus we're currently working on our mobile web experience, written in React, Redux, and Reselect. As our implementation grew, so did our time to first paint.
<!-- more -->
So I've been spending a lot of time working on our performance. One tool I have found invaluable in the quest for page perf mecca is [source-map-explorer](https://www.npmjs.com/package/source-map-explorer). This is a tool that dives into a bundled file, and its map. Then visualizes the bundle in a tree view. This view lets you easily understand exactly what is taking up space in the bundle. What I love about this tool is that it works with any type of bundled javascript file, and is completely seperate of the build. So any bugs in webpack where you have duplicate files in a bundle will appear here.
So I've been spending a lot of time working on our performance. One tool I have found invaluable in the quest for page-perf Mecca is [source-map-explorer](https://www.npmjs.com/package/source-map-explorer). This is a tool that dives into a bundled file and its source map, then visualizes the bundle in a tree view. This view lets you easily understand exactly what is taking up space in the bundle. What I love about this tool is that it works with any kind of bundled JavaScript file and is completely independent of your build, so any bugs in your webpack config that lead to duplicate files in a bundle will show up here.
## Getting started
You get started by `npm install -g source-map-explorer` then just download your bundles, and sourcemaps. In the command line run `source-map-explorer ./yourbundle.js ./yourbundlemap.js` Your browser should then open with a great tree view of what is inside your bundle. From here you can look to see what dependencies you have, and their sizes. Obviously, you can then decide to keep or throw them away.
You get started with `npm install -g source-map-explorer`, then just download your bundles and source maps. You can grab these from production if you have them; otherwise, build the bundles locally. **Note:** you should always use this on minified code where any tree shaking and dead code elimination has already occurred. On the command line, run `source-map-explorer ./yourbundle.js ./yourbundle.js.map`. Your browser should then open with a great tree view of what is inside your bundle. From here you can see what dependencies you have and their sizes, and decide which ones to keep or throw away.
![an example visualization](1.png)
Here is a great YouTube video explaining it in detail!

View File

@@ -0,0 +1,54 @@
title: 'Measuring, Visualizing and Debugging your React Redux Reselect performance bottlenecks'
date: 2019-01-14 22:04:56
tags:
- battle of the bulge
- javascript
- performance
---
In the battle of performance, one tool constantly reigns supreme: the all-powerful profiler! In JavaScript land, Chrome has a pretty awesome profiler, but every time I looked into our React perf issues I was always hit by a slow function called `anonymous function`.
<!-- more -->
## Using the chrome profiler
So if you open the Chrome devtools, you will see a tab called `Performance`. Click on that tab. If you are looking into CPU-bound workloads, click the CPU dropdown and set a 6x slowdown, which will emulate a much slower device.
![An image showing the chrome devtools](1.png)
Press the record button, click around on your page, then click the record button again. You are now presented with a timeline of your app and what scripts were run during that time.
So what I personally like to do is find the orange bars that often make up the bulk of the time. However, I've noticed the bulk of bigger Redux apps is often taken up by `anonymous functions`, functions that essentially have no name. They often look like this: `() => {}`. This is largely because they live inside [reselect selectors](https://github.com/reduxjs/reselect). In case you are unfamiliar, selectors are functions that cache computations off the Redux store. Back to the Chrome profiler: one thing you can do is use the `window.performance` namespace to measure and record performance metrics in the browser. If you expand the `User Timing` section in the Chrome profiler, you may find that React in dev mode has included some visualizations of how long components take to render.
![react user timings in chrome](3.png)
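As a minimal illustration of the `window.performance` marks and measures mentioned above (the function and mark names are made up for the example):
```js
// A made-up expensive projection, standing in for a real selector
function computeExpensiveProjection(items) {
  return items.map(x => x * 2).reduce((a, b) => a + b, 0);
}

performance.mark('projection-start');
computeExpensiveProjection([1, 2, 3]);
performance.mark('projection-end');
// The measure shows up under "User Timing" in the Performance tab
performance.measure('computeExpensiveProjection', 'projection-start', 'projection-end');
```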
## Adding your own visualizations
So, digging into other blog posts, I found posts showing how to [visualize your Redux actions](https://medium.com/@vcarl/performance-profiling-a-redux-app-c85e67bf84ae) using the same performance API mechanisms React uses. That blog post uses Redux middleware to add timings to actions. This narrowed down our performance problems, but it did not point out the exact selector that was slow. Clearly we had an action that was triggering an expensive state update, but the time was still spent in `anonymous function`. That's when I had the idea to wrap reselect selector functions in a function that records the timings. [This gist is what I came up with](https://gist.github.com/TerribleDev/db48b2c8e143f9364292161346877f93).
So how does this work exactly? Well, it's a library that wraps the function you pass to reselect and adds markers to the window to tell you how long reselect selectors take to run. Combined with the previously mentioned blog post, you can now get selector timings in Chrome's performance tool! You can also combine this with the [Redux middleware](https://medium.com/@vcarl/performance-profiling-a-redux-app-c85e67bf84ae) I previously mentioned to get deeper insight into how your app is performing.
![a preview of selectors reporting their performance](2.png)
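The real code lives in the gist linked above; the sketch below is just shorthand for the wrapping idea, assuming the result function is the last argument you pass in.
```js
import { createSelector } from 'reselect';

// Sketch of a marked selector factory (not the actual gist): wrap the result
// function with User Timing marks, then hand everything to reselect as usual.
export function createMarkedSelector(name, ...args) {
  const resultFunc = args.pop();
  const marked = (...selectorArgs) => {
    performance.mark(`${name}-start`);
    const result = resultFunc(...selectorArgs);
    performance.mark(`${name}-end`);
    performance.measure(name, `${name}-start`, `${name}-end`);
    return result;
  };
  return createSelector(...args, marked);
}
```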
## So how do I use your gist?
You can copy the code into a file of your own. If you use reselect, you probably have code that looks like the following.
```js
export const computeSomething = createSelector([getState], (state) => { /* compute projection */ });
```
You just need to replace the above with the following:
```js
export const computeSomething = createMarkedSelector('computeSomething', [getState], (state) => { /* compute projection */ });
```
It's pretty simple; it just requires you to pass a string as the first argument. That string will be the name used when writing to the performance API, and it will show up in the Chrome profiler. Inside VS Code you can even do a regex find-and-replace to add this string.
```
find: const(\s?)(\w*)(\s?)=(\s)createSelector\(
replace: const$1$2$3=$4createMarkedSelector('$2',
```

(Binary image files changed; before/after dimensions and sizes not shown.)