Saturday, November 18, 2017

飛碟 FT-1000BS ViewPower in OSX

I just bought this UPS, a 飛碟 FT-1000BS. After installing ViewPower v2.14 SP1 on High Sierra, the Tomcat web UI opens, but it doesn't seem to receive any UPS data over USB.

Digging through the macOS logs, I found that the ViewPower installer quietly issues three commands via sudo, which fail with "incorrect password attempt ;":

/bin/cp -f -r /Applications/ViewPower2.15/UPSVendor.kext /System/Library/Extensions/

/bin/cp -f -r /Applications/ViewPower2.15/jre/libusb-1.0.0.dylib /System/Library/Extensions/

/bin/cp -f -r /Applications/ViewPower2.15/jre/ /Library/Java/Extensions

Manually copying the drivers to the relevant locations fixed it. Apparently the ViewPower installer assumes the user happens to have a cached sudo password from a recent sudo? XDDD
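
In other words, it is enough to re-run the installer's three commands with sudo yourself (paths as in the log above; note that on High Sierra, System Integrity Protection may block the copies into /System/Library/Extensions):

sudo /bin/cp -f -r /Applications/ViewPower2.15/UPSVendor.kext /System/Library/Extensions/
sudo /bin/cp -f -r /Applications/ViewPower2.15/jre/libusb-1.0.0.dylib /System/Library/Extensions/
sudo /bin/cp -f -r /Applications/ViewPower2.15/jre/ /Library/Java/Extensions
# a common trick to trigger a kext cache rebuild; a reboot may still be needed
sudo touch /System/Library/Extensions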


Saturday, June 3, 2017

universal build machine: holy build box

Today a colleague shared that this thing exists:
https://github.com/phusion/holy-build-box

At first sight it blew my mind; I really think this has serious potential...

For an agent program that currently may need to support RedHat 6 / RedHat 7 / Ubuntu 12 / Ubuntu 14 / Ubuntu 16 / Debian 7 / Debian 8 / Amazon Linux / Oracle Linux 5 / Oracle Linux 6 / Cloud Linux 5 / Cloud Linux 6 / Cloud Linux 7 (and BTW, Debian 9 is about to be released too...), just maintaining all those build machines and testing plain builds can cost the whole R&D team countless hours on platforms we may never manage to build on one by one....

And what is the value to the customer? Only having to download / import a single agent sounds pretty cool!
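
For reference, a minimal usage sketch adapted from the holy-build-box README (the image tag and compile command are illustrative): the build runs inside a container with an old glibc environment, so the resulting binary should run on all of the distros above.

docker run -t -i --rm \
  -v `pwd`:/io \
  phusion/holy-build-box-64:latest \
  /hbb_exe/activate-exec \
  bash -x -c 'cd /io && gcc -o hello hello.c'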


Sunday, May 28, 2017

docker run cmd with pipe broken?

The build-container Dockerfile I put on GitHub more than half a year ago suddenly got picked up by a colleague, but I had forgotten most of the details XD

I remembered that I bind-mount the git repository into the container, then use git archive HEAD | tar -x -C /tmp to export the repository into the container's /tmp directory before starting the build.

For some reason, running it as a one-liner from the host makes tar complain about a broken archive...

docker run -it -v $(pwd):/mnt/repo -w /mnt/repo mybuild:latest git archive HEAD | tar -x -C /tmp && cd /tmp && make

But running the archive-and-pipe-to-tar inside the container works fine every time...
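
In hindsight, two things are going on: the host shell parses the pipe, so only git archive HEAD runs in the container while tar (and make) run on the host, and the -t flag allocates a pseudo-TTY that mangles the binary archive stream on stdout, hence the "broken archive". A sketch of a one-liner that runs the whole pipeline inside the container instead (same mybuild image as above):

docker run -v $(pwd):/mnt/repo -w /mnt/repo mybuild:latest \
  sh -c 'git archive HEAD | tar -x -C /tmp && cd /tmp && make'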

In the end I made it the default command via CMD in the Dockerfile, and that works too....

docker run -it -v $(pwd):/mnt/repo -w /mnt/repo mybuild:latest


Monday, May 15, 2017

Disable SMBv1 to avoid the EternalBlue exploit on Windows 7

Disable the SMBv1 server on Windows 7:

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -Type DWORD -Value 0 -Force

Disable the SMBv1 client on Windows 7:
sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
sc.exe config mrxsmb10 start= disabled
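
To verify (a quick check that only reads back the registry value and the service configuration set above; a reboot is required for the client-side changes to take effect):

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1
sc.exe qc lanmanworkstation
sc.exe qc mrxsmb10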



Thursday, April 27, 2017

[MV] 我心中尚未崩壞的地方

One day, he discovered he had somehow become the clown on a film set. The clown was lucky enough to step onto the stage, where he found the light. But light is brief and goes out, so he could only keep searching for bigger, brighter lights. The clown found brighter and brighter lights, until the strong light burned him.... The wounded clown then found a soft, faint light, but it went out all the same. In the end, the clown realized that even without any light, wherever he wanted to perform was his stage. Was the clown chasing the light, or searching for a stage?

Wednesday, April 19, 2017

Google Cloud OnBoard 2017 Taipei

Google Cloud Platform

  • Per minute billing
  • Sustained use pricing: automatic discount of up to 25% (20% per each 25% of usage)
  • Compute Engine: customizable CPU and memory (add more memory)
  • Committed use discounts (1 year or 3 years)
  • Cloud-native use cases
  • Free trial: USD 300 (valid for 1 year)

IAM

  • Google Account / Service Account / Google Groups / G Suite accounts
  • Organization?

App Engine

  • Similar to AWS Elastic Beanstalk or AWS Container Service
  • Cloud Shell / edit / preview (very nice integration with the browser!!)
  • Standard environment / Flexible environment (provides SSH)
  • PaaS, auto scaling, containers (deploy sketch below)
  • Eclipse wizard integration
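
For reference, a minimal deploy flow sketch with the gcloud CLI (the project ID and app.yaml are placeholders, not from the session):

gcloud app deploy app.yaml --project my-gcp-project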

Cloud Datastore

  • Similar to AWS DynamoDB?
  • Encryption / Sharding / Replication
  • NoSQL 
  • Auto scaling

Billing

  • Free tier: 28 instance hours per day? / cost calculator

Cloud Storage

  • Similar to AWS S3 (bucket / region / storage class by access frequency); sketch below
  • Objects up to 5 TB
  • BLOB storage
  • Priced per GB per month (minute-level granularity)
  • Multi-Regional 0.026 / Nearline (about 1 access per month) 0.01 / Coldline (about 1 access per year) 0.007, in USD per GB-month
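
A quick gsutil sketch of the storage classes above (bucket name, file, and location are placeholders):

# create a Nearline bucket for roughly once-a-month access, then upload a blob
gsutil mb -c nearline -l asia-east1 gs://my-example-bucket/
gsutil cp backup.tar.gz gs://my-example-bucket/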

Bigtable

  • High loading read/write
  • Cloud Dataflow, Dataproc (Hadoop) integration
  • SunGard, Gmail, Google Analytics

Cloud SQL

  • Similar to AWS RDS
  • MySQL 5.5 / 5.6, PostgreSQL (beta)
  • Cloud Spanner
  • Horizontally scalable
  • ACID and SQL queries, High Availability 

GKE: Container Engine

  • Kubernetes
  • Auto scaling / deployment modes (Blue/Green, Rolling Update)
  • kubectl scale / LB / expose … (sketch below)
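
The scale / expose commands mentioned above look roughly like this (deployment name and port are illustrative):

kubectl scale deployment my-app --replicas=5
kubectl expose deployment my-app --type=LoadBalancer --port=80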

Compute Engine

  • Similar to AWS EC2 but with additional customization and charging features...
  • Preemptible instance (like an AWS spot instance?); sketch below
  • Add template, group then add group to LB
  • Why did they keep mentioning pre-warming?
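
A sketch of creating a preemptible instance with gcloud (instance name and zone are placeholders):

gcloud compute instances create build-worker-1 --zone=asia-east1-a --preemptible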

Google Stackdriver

  • Monitor / Trace / Logging / Report / Debugger
  • Fluentd

Lifecycle of Machine Learning model

  • Hosted TensorFlow service (!! AWS ML does not provide an offline SDK or framework for development)
  • Import / export models (!! AWS ML does not support this)
  • Faster training time (with GPUs)
  • Data analysis -> cleanup
  • The model might not fit the target market (dinner time is around 7 PM in Asia / 9 PM in the Middle East)
  • Linear regression, Python pandas, BigQuery/TensorFlow => predict taxi demand from weather
  • Convolutional Neural Network => handwriting recognition

BigQuery

  • Data warehouse for analytics
  • Very interesting demo: run SQL-like queries and see results on the fly (with query duration shown); sketch below
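
The same on-the-fly querying works from the CLI too; a sketch against a public dataset (the dataset and query are my own illustration, not from the session):

bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name ORDER BY total DESC LIMIT 10'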

Datalab

  • For data scientists
  • Very interesting use case! Wiki-style documents / run Python (pandas) and plot charts
  • Average / RMSE
  • Exploratory plots (weather and taxi trip count)
  • CNN => signature
  • 3 demos
  • Classification => drawing
  • Prediction => weather and taxi trips
  • Convolutional Neural Network => handwriting recognition

Summary

  • Very similar to a subset of AWS services, but AWS has more complete coverage and use cases.
  • Machine Learning allows exporting models and is based on the open source TensorFlow framework
  • Billing is more flexible than AWS
  • Seems to put more emphasis on container use cases
  • Some special database/storage offerings, such as Cloud Spanner, Bigtable, BigQuery....
  • The browser integration and UX are quite geeky and interesting (Datalab / BigQuery / Cloud Shell / Cloud Preview / in-browser editing ...etc...)


Monday, April 10, 2017

AWS Machine Learning Workshop


Machine Learning Concepts

  • Apply AWS ML to problems where you have existing samples of the actual answers
  • For example, to predict if new email is spam or not, you need to collect examples of spam and non-spam.
  • Binary classification (true / false)
  • Is an email spam or not, will a customer churn, will a customer accept a campaign?
  • Multiclass classification (one of more than two outcomes)
  • Regression (numeric number)
  • Building a Machine Learning Application
  • Frame the core ML problems
  • Collect, clean and prepare data
  • Features from raw data
  • Feed to learning algorithm to build models
  • Use the model to generate predictions for new data

Linear Models

  • The learning process computes one weight for each feature to form a model that can predict the target value
  • For example, estimated target = 0.2 + 5 * age + 0.00003 * income

Learning Algorithm

  • Learn the weights of the model
  • Loss function: the penalty incurred when the target estimated by the model does not equal the actual result
  • Optimization technique: minimize the loss with Stochastic Gradient Descent (SGD); each pass updates the feature weights one example at a time, aiming to approach the optimal weights that minimize the loss.
  • For binary classification, Amazon ML uses logistic regression (logistic loss function + SGD).
  • For multiclass classification, Amazon ML uses multinomial logistic regression (multinomial logistic loss + SGD).
  • For regression, Amazon ML uses linear regression (squared loss function + SGD)

Evaluate Model Accuracy

  • 70% of the data is used to build the model, 30% for evaluation
  • For binary classification, a score of 0.5 is almost the same as random guessing

Workshop

  • Download the samples from http://bit.ly/john-2017ml-labdata, create an S3 bucket, and upload the 3 CSV files into that bucket.
  • churn_new.csv => create data source from the S3 file link => create model => use a custom recipe
  • The file has 3,334 records with the columns "State,Account Length,Area Code,Phone,Intl Plan, VMail Plan, VMail Message, Day Mins,Day Calls,Day Charge,Eve Mins,Eve Calls,Eve Charge,Night Mins,Night Calls,Night Charge,Intl Mins,Intl Calls,Intl Charge,CustServ Calls,Churn?"; once you import it into AWS ML, you automatically get a model that predicts whether a customer will leave or keep their subscription.
  • 70% of the imported data is used to build the model, and 30% is used to evaluate the accuracy of the model.
  • banking.csv => create data source from the S3 file link => create model => use the default recipe
  • banking-batch.csv => create a batch prediction from the model above (CLI sketch below)
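
The console steps above also map onto the AWS CLI; a rough sketch (bucket name, IDs, and the schema file are placeholders — the console normally infers the schema for you):

aws machinelearning create-data-source-from-s3 \
  --data-source-id churn-ds-01 \
  --data-spec '{"DataLocationS3":"s3://my-ml-bucket/churn_new.csv","DataSchemaLocationS3":"s3://my-ml-bucket/churn_new.csv.schema"}' \
  --compute-statistics
aws machinelearning create-ml-model \
  --ml-model-id churn-model-01 \
  --ml-model-type BINARY \
  --training-data-source-id churn-ds-01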

Thoughts

  • This 3-hour workshop is easy and helps you get a basic understanding of how to use the AWS Machine Learning service to automatically create a model, evaluate it, and call APIs for prediction.
  • Prepare your data in CSV format and upload it to S3; AWS creates the rest of the modeling and the evaluation results for you.
  • There are also other sources from which you can import real production data, such as RDS / Redshift ...etc...
  • The visualizations make it easy to evaluate the model
  • There are APIs for making predictions based on your created models (CLI sketch after this list)
  • Batch prediction
  • Real-time prediction
  • The hardest part is "How do you prepare your data and extract features from the raw data?"
  • The AWS Machine Learning documentation is worth reading! It gives you a basic understanding of machine learning concepts and of how AWS does things internally.
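
For the real-time prediction API mentioned above, a hedged CLI sketch (model ID and record values are placeholders; field names must match the CSV schema):

# one-time: provision a real-time endpoint for the model
aws machinelearning create-realtime-endpoint --ml-model-id churn-model-01
# then predict on a single record
aws machinelearning predict \
  --ml-model-id churn-model-01 \
  --record '{"State":"OH","Day Mins":"197.4","CustServ Calls":"4"}' \
  --predict-endpoint https://realtime.machinelearning.us-east-1.amazonaws.com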




Tuesday, March 28, 2017

Why creating at least two subnets in different AZs is a must-have practice

Ran into an interesting problem today: creating an RDS DB instance inside an AWS VPC turned out to be unsolvable.

The VPC was originally planned with only two subnets, and no additional subnets can be added (the two /25 subnets below already cover the entire /24 VPC CIDR, leaving no free address space).

VPC CIDR: 192.168.1.0/24
Subnet 1 CIDR: 192.168.1.0/25
Subnet 2 CIDR: 192.168.1.128/25

Even worse, both subnets belong to the same Availability Zone (AZ).....

Although you can choose "Multiple AZ deployment = No" when creating an RDS instance in a VPC, you still must create a DB subnet group.

And a DB subnet group requires at least two subnets in different AZs.
I originally assumed that if I chose a single-AZ deployment, creating a DB subnet group would not be necessary, but apparently AWS does not allow that. Checking "Working with an Amazon RDS DB Instance in a VPC", the first requirement really does say:
"Your VPC must have at least one subnet in at least two of the Availability Zones in the region where you want to deploy your DB instance. A subnet is a segment of a VPC's IP address range that you can specify and that lets you group instances based on your security and operational needs."

I am not sure how you could have only one subnet yet two AZs, but as I understand it, this means your VPC must have two or more subnets, with at least two of them in different AZs.

The moral of the story:

  • When creating a VPC, go big, not small. Even if only two subnets are usable now, leave some room so more subnets can be added later.
  • Subnets in a VPC are best manually pinned to different AZs (sketch below); otherwise, deleting and recreating the VPC or subnets is a very painful task...
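
For reference, pinning the AZ explicitly at subnet creation time with the AWS CLI looks roughly like this (the VPC ID and AZ names are placeholders):

aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 192.168.1.0/25 --availability-zone ap-northeast-1a
aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 192.168.1.128/25 --availability-zone ap-northeast-1c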