Microsoft Visual Studio Code Preview and .NET Core on Linux

Now that Microsoft is also starting to embrace open source, free software, and Linux, those of us working in IT should take a moment to reflect.

.NET Core now supports Linux, as Microsoft said it would, and there is even a FreeBSD version (in development). The build currently sitting on GitHub already covers 45% of the full .NET Core API, and I believe it will reach 80% or more soon. For details, see the .NET blog:
.NET Announcements at Build 2015
http://blogs.msdn.com/b/dotnet/archive/2015/04/29/net-announcements-at-build-2015.aspx

The other big news is Microsoft Visual Studio Code, Visual Studio's … sibling? Or rather an early open-source edition of it, reportedly based on GitHub's Atom editor.
It looks like courses that involve programming may finally get one IDE to rule them all?

Official site:
https://www.visualstudio.com/en-us/products/code-vs.aspx

[Screenshots: the Microsoft Visual Studio Code Preview website, and my own install of it running; hoping it will look more and more like the Visual Studio we know XD?]

Some settings to configure after installing a system

NTP server:
You rarely see schools running their own NTP server anymore. As for why you should configure an NTP time server, the reason is simple: a wrong system clock causes plenty of problems, from making timestamps a less reliable reference when you dig through logs, to making you miss appointments, lose out on online flash sales, or even fail to reach websites at all (SSL certificates are date-sensitive). A badly chosen NTP server, in turn, means large time drift, slow queries, or no answer at all. I compiled a list before, see this earlier post – NTP servers that work well in Taiwan. The Windows built-in default is time.windows.com, which belongs to the "not very usable" kind … I strongly recommend changing it; there is an option in the system time settings, reference screenshot below:
[Screenshot: Windows Internet Time settings]
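If you prefer the command line, the built-in w32tm tool can do the same thing from an elevated command prompt (time.stdtime.gov.tw below is just one example of a Taiwanese public NTP server; substitute whichever one you picked from the list):

  • w32tm /config /manualpeerlist:"time.stdtime.gov.tw" /syncfromflags:manual /update
  • w32tm /resync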

DNS server:
Companies and schools generally still run their own DNS servers; unless their quality or speed has serious problems, I suggest simply using the local ones. I ran some tests before, and if you want to test this yourself you can follow the approach in – Does using Google's Public DNS make browsing faster? Is Google's DNS really faster? A test of commonly used DNS servers. The real cost of DNS has to include the query time, so using the ping value alone as a measure of DNS speed, as some people do, is not really appropriate … I also compiled some lists earlier: – List of DNS servers commonly used by Taiwanese ISPs, and List of commonly used Public DNS servers (IPv4).
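If you want to measure this yourself, a quick sketch with dig is enough, since the ";; Query time:" line in its output is the actual resolution time rather than just the round-trip to the server (168.95.1.1 is HiNet's DNS and 8.8.8.8 is Google's, both only examples; swap in whatever you want to compare):

  • dig www.google.com @168.95.1.1 | grep "Query time"
  • dig www.google.com @8.8.8.8 | grep "Query time"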

Windows WSUS / Linux/FreeBSD mirror:
Pointing Linux or BSD package/source repositories at a mirror is a must, no explanation needed … Windows has a similar mechanism called WSUS – Windows Server Update Services, which does much the same thing, and for an enterprise or campus environment it can save a very significant amount of bandwidth and time.
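As a minimal sketch on a Debian/Ubuntu-based box (this assumes the default archive.ubuntu.com entries in sources.list, and tw.archive.ubuntu.com is only an example mirror; back the file up first):

  • sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
  • sudo sed -i 's/archive.ubuntu.com/tw.archive.ubuntu.com/g' /etc/apt/sources.list
  • sudo apt update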

Disable Windows System Restore:
Many people don't know Windows has this feature, and those who do rarely know how to use it well. If that's you, you might as well turn it off: the automatic background snapshots are not worth the resources they eat, and when the machine really gets infected or the system is corrupted, this feature usually can't save you anyway …
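If you do want to turn it off without clicking through the GUI, here is a sketch in Windows PowerShell (run as administrator, and it assumes the system drive is C:):

  • Disable-ComputerRestore -Drive "C:\"
  • Get-ComputerRestorePoint   (lists whatever restore points still exist)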

As for system updates and the antivirus software that Windows needs, that should need no further explanation. I'll add more as things come to mind XD

Flush local DNS cache on a browser and local system

Browser (application) level:

For Google Chrome/Chromium, open the link below:
chrome://net-internals/#dns
and click “Clear host cache”.

For Firefox, open the link below:
about:config
then click “I’ll be careful, I promise!”,
and find network.dnsCacheExpiration, set its value to 0 (create it if it doesn’t exist).
The cache should now be flushed; afterwards set network.dnsCacheExpiration back to 3600, otherwise the cache will not work at all.

Operating system level:

  • Windows:
    • ipconfig /flushdns
  • Linux:
    Depending on the DNS service you are using:
    • sudo systemd-resolve --flush-caches
    • sudo /etc/init.d/dns-clean restart
    • sudo /etc/init.d/nscd restart
    • sudo /etc/init.d/dnsmasq restart
    • sudo /etc/init.d/named restart
  • macOS (> v10.5):
    • dscacheutil -flushcache
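If you want to confirm the flush actually happened, a minimal check, assuming systemd-resolved on the Linux side (on Windows, ipconfig can list what is still cached):

  • systemd-resolve --statistics   (the cache size counter should drop back to zero right after flushing)
  • ipconfig /displaydns   (Windows; shows the entries still in the local resolver cache)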

Use multiple CPU threads/cores to make tar compression faster

On many Unix-like systems, tar is the go-to tool for packaging and compressing files, and it is built in to almost every common Linux and BSD distribution. However, tar tends to spend a lot of time on compression, because the compression programs themselves don't do multi-threading. Fortunately, tar can hand the compression off to a specified external program, which means we can use compressors that do support multi-threaded compression and get much higher speed!

From the tar manual (man tar), we can see:

-I, --use-compress-program PROG
filter through PROG (must accept -d)

With the parameter -I or --use-compress-program, we can select the external compressor program we'd like to use.

The three parallel-compression tools I will use today can all be easily installed via apt install on Debian/Ubuntu-based GNU/Linux distributions. Here are the formats and the corresponding apt package names (an install command follows the list); note that newer versions of Ubuntu and Debian no longer ship the pxz package, but pixz does a similar job:

  • gz:   pigz
  • bz2: pbzip2
  • xz:   pxz, pixz
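For example, to grab them all at once on a recent Ubuntu/Debian (pxz left out since it is no longer packaged):

  • sudo apt install pigz pbzip2 pixz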

The original tar commands with compression look like this:

  • gz:   tar -czf tarball.tgz files
  • bz2: tar -cjf tarball.tbz files
  • xz:   tar -cJf tarball.txz files

The multi-thread version:

  • gz:   tar -I pigz -cf tarball.tgz files
  • bz2: tar -I pbzip2 -cf tarball.tbz files
  • xz:   tar -I pixz -cf tarball.txz files
  • xz:   tar -I pxz -cf tarball.txz files
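The same switch also works for extraction, because tar invokes the external program with -d when decompressing (that is what the "(must accept -d)" note in the manual is about), for example:

  • tar -I pigz -xf tarball.tgz
  • tar -I pixz -xf tarball.txz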

I am going to use the Linux kernel v3.18.6 source tree as the compression example: I threw the whole directory onto a ramdisk, compressed it, and compared the difference!
(PS: the CPU is an Intel(R) Xeon(R) E3-1220 V2 @ 3.10GHz, 4 cores / 4 threads, with 16 GB RAM)

Result comparison:

[Screenshot: terminal output of the single-thread vs multi-thread compression runs]

Time spent:
                 gzip        bzip2       xz
Single-thread    17.466s     50.004s     3m54.735s
Multi-thread     4.623s      13.818s     1m10.181s
Speed-up         3.78x       3.62x       3.34x

Because I didn't specify any compressor parameters and just let them use their default compression level, the resulting file sizes may differ a little between the tools, but they are quite close. We can still pass parameters to the external compression program like this: tar -I "pixz -9" -cf tarball.txz files; just quote the command together with its arguments, which is also pretty easy.

[Screenshot: resulting file sizes with the default level and with -9]

With the -9 parameter to raise the compression level (it may need more memory while compressing), the result becomes 81020940 bytes instead of 84479960 bytes, so we save an extra ~3.3 megabytes! (It also took about 40 more seconds, so you decide whether it's worth it!)
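The same quoting trick can also limit how many threads the compressor uses, in case you don't want it to eat all your cores; as far as I remember pigz and pixz take -p followed by a thread count (pbzip2 writes it without the space, like -p2), for example:

  • tar -I "pigz -p 2" -cf tarball.tgz files
  • tar -I "pixz -p 2" -cf tarball.txz files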

This is very useful for me!!!