Comcast provides me with a Dynamic WAN IP Address, but I’d love to be able to e.g. VPN back into my home network without having to know the current IP.
This is where a dynamic DNS service comes in. After a bit of looking around, I picked Duck DNS. They seem to be one of those nice little “it’s free and works for now” services run by people as a hobby, as opposed to all of the other free services that force you to log into the web interface every week or to buy their premium plan.
To get your Duck DNS data:
This should give you the generated token for your account. Now hop on over to the dynamic DNS web interface of your EdgeOS router. As of 1.7.0, these are the parameters you will have to enter to make it work:
Service: [custom -] duckdns
Hostname: thing in front of .duckdns.org (example.duckdns.org ==> example)
Login: nouser
Password: your generated token
Protocol: dyndns2
Server: www.duckdns.org
If you want to debug a bit on the commandline, you can run the following commands:
Show the current status:
$ show dns dynamic status
interface : eth0
ip address : 12.34.56.78
host-name : example
last update : Mon Sep 7 17:09:29 2015
update-status: good
Trigger an update
$ update dns dynamic interface eth0
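For what it’s worth, Duck DNS can also be updated directly over its HTTP API, e.g. from a cron job, in case the router integration ever misbehaves. A minimal Ruby sketch (the domain and token below are placeholders):

```ruby
require 'uri'

# Build the Duck DNS update URL; pass an explicit IP or let the
# service use the IP the request comes from.
def duckdns_update_url(domain, token, ip = nil)
  query = "domains=#{domain}&token=#{token}"
  query += "&ip=#{ip}" if ip
  URI("https://www.duckdns.org/update?#{query}")
end

# require 'net/http'
# Net::HTTP.get(duckdns_update_url('example', 'your-token'))
# the API answers "OK" on success and "KO" on failure
```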
Duck DNS seems to be a decent free service. Especially since my dynamic DNS names are really not all that important, I opted against moving a domain back over to Namecheap and setting dynamic DNS up on a personal domain, or trying to use the Cloudflare API, for now.
Maybe on a rainy day :)
Some of these companies require NDAs for the interview process in case you run
into any secrets while you’re on site, so I won’t name any names or describe
any specific details.
This post is supposed to summarize the similarities between the companies and
to give an overview of what to expect during the tech interviewing process.
My experiences were particular to the type of position I was approached for. The position was very similar at all of them. Some companies call it “Production Engineer”, some call it “Site Reliability Engineer” (SRE). The idea is the same: it is the middle ground between a systems engineer and a software engineer. The position requires in-depth knowledge of 5 different areas:
You don’t have to know all of them, but should at least have a good knowledge of 2-3 and a basic understanding of all of them.
For the SRE position, these are usually not the brain teasers you can read about in the tabloids. It also didn’t consist of very data-structure-heavy acrobatics (I didn’t have to rebalance a red-black tree or implement mergesort).
Most of the time people just want to see that you can develop reasonably
complex tooling and know the pitfalls that you encounter in production.
It usually starts out as a basic task (e.g. log parsing, file pruning, …) and then gets extended a bit (“What if this had to run continuously?”).
The gotchas are the usual things that you run into when working on an actual
system and not just sitting in a lecture about one.
It starts with escaping spaces and ends at multiline syslog messages and
“does this file fit into RAM?” kind of problems. Most of the time, you get to
pick your programming language of choice. I would usually suggest Ruby or
Python. Nobody wants to get stuck in weird IO interfaces or languages that don’t support strings natively ;)
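To give a flavor of the task, here is a sketch of the “count HTTP status codes in a log” variant, streaming the file line by line so the “does this file fit into RAM?” follow-up is already covered (the log format is an assumption on my part):

```ruby
# Count HTTP status codes without slurping the whole log into memory.
def status_counts(path)
  counts = Hash.new(0)
  # File.foreach streams one line at a time instead of reading it all
  File.foreach(path) do |line|
    # naive match for common log format: ... "GET / HTTP/1.1" 200 1234
    counts[$1] += 1 if line =~ /" (\d{3}) /
  end
  counts
end
```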
Depending on the interviewer, you might end up having to do a little bit of string manipulation (find all palindromes, group by x, …), but since most of these string manipulations are relatively approachable, I rather enjoyed myself, even though I would classify my remaining theoretical data structure/algorithm knowledge as “could use some polish”.
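The palindrome task is about as hard as it got for me; from memory, something along these lines:

```ruby
# Return all words that read the same forwards and backwards
# (ignoring case, skipping single letters).
def palindromes(text)
  text.scan(/[[:alpha:]]+/).select do |word|
    word.length > 1 && word.downcase == word.downcase.reverse
  end
end
```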
The systems part of the interviews is usually targeted towards Linux.
It includes filesystem knowledge (What are inodes?), knowledge about the process lifecycle (What is fork+exec? How do signal handlers work? Thread vs. process?), and Linux internals (What is load? Describe the boot process. How does dynamic linking work?).
These all require relatively in-depth answers of more than a sentence.
The deeper into detail you can go, the better.
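For the fork+exec question in particular, the answer they’re after can be shown in a few lines: fork duplicates the calling process, exec replaces the child’s program image, and the parent reaps the exit status.

```ruby
# fork: the child starts as a copy of the parent.
pid = fork do
  exec('true') # exec: the child becomes /bin/true and exits with its status
end
Process.wait(pid)      # parent blocks until the child exits
status = $?.exitstatus # 0, since /bin/true always succeeds
```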
At least for me, this wasn’t too much about Spanning Tree or BGP.
The networking interviews focused more on the application side of things.
A lot of conversations about TCP (Nagle’s algorithm, TCP CORK, …), DNS (Glue Records, recursive resolvers, …), IP (CIDR), SSL, …
I was once even asked what my favorite protocol was. Luckily I had skimmed my thesis on anonymous filesharing on the flight over, so I had some talking points :)
A lot of the time, you will hear open ended questions (“You type a URL in your browser and hit enter, what happens?”) and can go down the stack to your heart’s desire :)
This part of the interview is the one that differs most between the companies.
It is probably also the hardest one to come up with as an interviewer.
It ranges from actual debugging of LAMP problems inside a VM to looking at alerts and prioritizing them, to looking at a 32 thread stacktrace and telling a
story of what happened.
Some of the interviewers are able to play a D&D style “dungeon master” role and
give you a hypothetical system on which a defect is manifesting itself. You then
have to describe your steps to zero in on the problem while the interviewer will
tell you the results of your queries (“I check for inodes using df -i” - “You see that you have a utilization of 30%”).
This is one of the interviews that is probably a big unknown to people who
have mostly dealt with smaller systems before.
The interview is usually an interactive whiteboarding session in which you have
to design a system that withstands a certain amount of requests.
The initial requirements are relatively tame and the interviewer will gradually
force your architecture to scale more and more. This is where you can bring in
your knowledge about load balancers, caching layers, consistent hashing and
sharding. Bonus points for fancy things like bloom filters or hyperloglog :)
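Since consistent hashing comes up so often, here is a whiteboard-level sketch: virtual nodes placed on a hash ring, so that adding a server only remaps a fraction of the keys (the replica count and hash function are arbitrary choices):

```ruby
require 'digest'

# Toy consistent-hash ring with virtual nodes.
class HashRing
  def initialize(nodes, replicas: 100)
    @replicas = replicas
    @ring = {}
    nodes.each { |n| add(n) }
  end

  # Place `replicas` virtual points per node on the ring.
  def add(node)
    @replicas.times { |i| @ring[point_for("#{node}##{i}")] = node }
    @points = @ring.keys.sort
  end

  # Walk clockwise to the first virtual node at or after the key's hash.
  def node_for(key)
    h = point_for(key)
    point = @points.bsearch { |p| p >= h } || @points.first
    @ring[point]
  end

  private

  def point_for(s)
    Digest::MD5.hexdigest(s)[0, 8].to_i(16)
  end
end
```

Going from three nodes to four should move roughly a quarter of the keys to the new node; with naive `hash % n` placement, nearly all of them would move.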
It probably also doesn’t hurt to know some of the technology that has emerged from the company in question. Most of the tech companies have 1-2 open source projects that might be worth a look beforehand.
It seems like all of the big tech companies have agreed on a way to do interviews.
Initially, I got contacted by a recruiter. This usually seems to happen either via LinkedIn or email (maybe via a GitHub profile?).
Most recruiters will usually talk a bit about the position, learn about your
experience and once they deem you a fit, will do a little pop quiz.
The pop quiz will consist of a set of 20-ish questions about all of the topics
mentioned above. Usually they can be answered with a single word or two.
(“What port does DNS run on?”, “What is saved on an inode?”, …)
Once the initial screening has been passed, there will be 3-4 phone interviews of about 45-60 minutes each. Each interview will go into one of the topics mentioned above. The coding is done using a collaborative online editor, which can also be used to paste stack traces and log entries for the systems questions. You might want to brush up on what all of the letters in vmstat mean ;)
At the end of each interview, there are usually 10 minutes set aside for
questions.
This is a good time to ask about the day to day stuff that the engineers might
be able to answer a bit better than the recruiter.
Once all of these phone interviews have gone reasonably well, the fun part starts: the on-site!
For me, this meant free flights from Boston to San Francisco! Not only did this
allow me to escape the winter, but it also allowed me to spend some time driving around SF and the valley.
I had never been before and was able to connect with some old friends and
colleagues.
Usually the companies cover the whole trip. From airport parking to a rental car, an allowance for food and hotel stays, it’s all taken care of.
Work is keeping me reasonably busy and I usually stay up to date by reading lots of blog-posts in my free time, so my only preparation for the interviews was the book Modern Operating Systems by Andrew Tanenbaum.
Besides a few Google searches about interview questions and a look at Glassdoor, I think using the 5+ hour flight to read over the Tanenbaum book was probably the thing that helped me the most.
For the 3rd interview, I also spent some time reading Programming Pearls. Solving these kinds of math-heavy problems is not something that comes naturally to me, but I think I got a better grasp of the problem space and how a different perspective can sometimes reveal elegant solutions.
Honestly, the whole experience was highly entertaining and I learned a lot.
I didn’t actively look for a job, so I was able to come into those interviews
without any pressure on me.
It was nice to see how a well-executed HR/recruiting organization can work, and
taking a peek inside all of these companies was really interesting.
I really enjoyed talking to the Engineers during the interviews and getting a
bit of a feeling for how the companies operate and what the people that make these giant infrastructures work do on a regular day.
As an added benefit, knowing one’s market value does help a lot on the professional development side of things.
We hereby confirm receipt of your inquiry and can provide the following additional information:
The German Federal Ministry of the Interior and the U.S. authorities have agreed to link their respective national Trusted Traveler programs. On the U.S. side this is the Global Entry system; on the German side it is the Automated Biometric Border Control (Automatisierte Biometriegestützte Grenzkontrolle, ABG). This cooperation agreement allows U.S. citizens registered with Global Entry to participate in ABG and, conversely, German citizens registered with ABG to participate in Global Entry (a frequent traveler program). Enrollment in the Global Entry system via the GOES online registration therefore requires prior participation/registration in ABG.
Travelers are therefore first registered in the German ABG program (retina scan) at our Federal Police service center. For this we need a valid, machine-readable passport (with chip). Registration is currently only possible at Frankfurt Airport and takes about 20 minutes. After registering, you will receive the access codes for signing up at www.globalentry.gov.
The service center is located in Terminal 1, Departures Hall A, next to Entrance 1. Our opening hours are generally Mon-Sun 07:00-21:00, with core hours of Mon-Sun 08:30-17:30. If you need an appointment, please contact us by phone or email.
The attached documents are for your information. You may bring them along filled out, but this is not strictly required. Please refrain from sending the documents to us by email in advance.
Once you have received the access codes from us, everything else has to be done online at www.globalentry.gov. Registration is subject to a fee of $100.00. After about three weeks you will receive a proposed appointment for an in-person interview in the United States. Only after that will the Global Entry kiosks be activated for you.
You can also find more information about EasyPASS, ABG+ and Global Entry on the following websites:
http://www.bundespolizei.de/DE/01Buergerservice/Automatisierte-Grenzkontrolle/ABG/abg_node.html
http://www.auswaertiges-amt.de/DE/Laenderinformationen/00-SiHi/UsaVereinigteStaatenSicherheit.html
www.globalentry.gov
We hope this information has been helpful, and we remain happy to answer any further questions.
Kind regards
So this one is about how we handled the OpenSSL Heartbleed Vulnerability at Acquia from a technical and a communication perspective.
The PDF version of the talk is available for download over here.
While filling out a DS-2019 is necessary, it’s all pretty self-explanatory. The first thing that I’d consider a bit out of the ordinary was paying the visa fee using the Roskos Meier visa system. I don’t know how an insurance agency from Berlin got into the job of taking payments for all American consulates, especially considering that the $10 interview application fee can be paid using a credit card, but I guess that’s a bit off-topic. One important thing to pay attention to: setting up an appointment on the internet will only allow you to sign up for one about a week from the day you apply. This was at least the case for me in the ‘off season’ (read: H1B applicants are mostly processed by now). You WILL need to show the Roskos Meier printout when you arrive at the consulate. Be sure to check for any bank holidays and weekends between now and your chosen appointment. For me, the confirmation took 4 business days to arrive (+ 2 weekend days and 1 bank holiday). The email arrived at 08:23 in the morning, so it might be an automated system.
I decided to take the car because the VVS/SSB and Deutsche Bahn recently had all sorts
of problems bringing people to where they need to go on time.
As you can see on google street view,
there is plenty of free street parking available right next to the consulate.
I had an early-ish (9:15) appointment and was one of a handful of cars. The employees
seem to have their own parking spots, so I think the street spots won’t be all that
crowded throughout the day.
There is also a U-Bahn (U5) stop right down the street called “Gießener Straße”
which will take you directly to the Hauptbahnhof (15-20 minutes).
Seeing as I went past 3 major cities on my way to Frankfurt, I decided to add a little time buffer for slow traffic along the way. There was pretty much no traffic at all, so I arrived around 8:30. When I asked the security guard out front, he said that I could probably just stand in line; as long as I have SOME form of appointment, they’ll let me in. So if there isn’t too much of a line, you might as well try.
To end up in the final room that does the processing you will:
In case of a longer waiting period, there are snack machines (German drinks, German candy) and a bathroom available. They put a “dyson airblade” in the bathroom, so I finally understood where all those visa fees ended up at ;)
After giving them your passport and fingerprints at the first booth and having
them reconfirmed at the second one, you will be at the final step of your journey.
I didn’t need to give them my print outs of the passport picture. They said that
if the DS-2019 looks fine, they will just take the digital one.
After waiting for my number (the one you got at the very first step) to show up,
I was at the last step of the process, the interview stage.
As far as the interview questions go, they were pretty generic in my case.
I assume just to make sure I actually know who I am and what I’m doing.
From what I recall, they were something along these lines:
Seeing as both the lady behind the glass and the company I work for were from Massachusetts, I had a bit of small talk about how cold winters are in Boston (always a favorite) and how little sun we get around Germany’s latitude. It was all very pleasant and everyone seemed to be in a good mood. After you (hopefully) get the magic “Your visa is approved”, you can exit the building the way you came in.
Getting the visa in your passport means that they’ll take a full page of your little passport book and glue in a colorful piece of paper with your picture on it. It basically says how long the visa is valid for (5 years in my case) and adds additional notes (“must present approved I-797 or I-129S at POE”). You don’t have to be at home for DHL express, they will just put it in your mailbox.
That was pretty much it, hope I could help whoever was directed here by a google search :)
The nice part, however, is that they provide an API that allows you to programmatically grab at least an overview of activity and sleep stats. It misses the actual timeframes (“you walked 1000 steps between 3:00 and 3:15”, “you woke up at 3:00, 3:45 and 5:00”) but it provides an OK summary (steps per day, fell asleep at time x, …).
I was a bit afraid that I’d have to screen-scrape their site to get to my data, but not only did I not have to do that, I also didn’t have to deal with the OAuth stuff myself.
Zachery Moneypenny (“whazzmaster”) had already created a client library for their API and provided a bit of sample code.
Using that library I was able to whip up a quick incremental backup script that saves the activities and sleep data as something machine readable.
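The core of the script is just a loop like the following sketch. The `client` here stands in for the Fitbit client library and `activities_on_date` is an assumed method name; the skip-if-present check is what makes the backup incremental:

```ruby
require 'yaml'

# Incremental backup: one YAML file per day, skipping days already fetched.
def backup_activities(client, date, dir)
  path = File.join(dir, date.strftime('%Y_%m_%d_activities.yaml'))
  return path if File.exist?(path) # already backed up, don't hit the API
  File.write(path, client.activities_on_date(date).to_yaml)
  path
end
```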
This is an example of what the activity data looks like:
$ cat 2012_09_10_activities.yaml
---
activities: []
goals:
activeScore: 1000
caloriesOut: 3092
distance: 8.05
floors: 10
steps: 10000
summary:
activeScore: 762
activityCalories: 1362
caloriesOut: 2885
distances:
- activity: total
distance: 6.43
- activity: tracker
distance: 6.43
- activity: loggedActivities
distance: 0
- activity: veryActive
distance: 2.25
- activity: moderatelyActive
distance: 3.36
- activity: lightlyActive
distance: 0.81
- activity: sedentaryActive
distance: 0
elevation: 9.14
fairlyActiveMinutes: 102
floors: 3
lightlyActiveMinutes: 139
marginalCalories: 913
sedentaryMinutes: 651
steps: 8114
veryActiveMinutes: 33
And the sleep data:
$ cat 2012_09_10_sleep.yaml
---
sleep:
- awakeningsCount: 8
duration: 30900000
efficiency: 97
isMainSleep: true
logId: 16236289
minutesAfterWakeup: 1
minutesAsleep: 471
minutesAwake: 16
minutesToFallAsleep: 27
startTime: '2012-09-10T00:56:00.000'
timeInBed: 515
summary:
totalMinutesAsleep: 471
totalSleepRecords: 1
totalTimeInBed: 515
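Having the data as plain YAML makes it easy to poke at. For example, deriving the awake-in-bed time from a (shortened) sleep record like the one above:

```ruby
require 'yaml'

# Parse a shortened version of the sleep record shown above.
night = YAML.load(<<~YAML)['sleep'].first
  sleep:
  - minutesAsleep: 471
    minutesAwake: 16
    minutesToFallAsleep: 27
    timeInBed: 515
YAML

# timeInBed minus minutesAsleep: minutes spent awake while in bed
awake_in_bed = night['timeInBed'] - night['minutesAsleep'] # 44
```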
You can find the script in my github repo. It’s not properly packaged and the Readme could use some polish, but this is more of a ‘scratching my own itch’ thingy that I thought might just save somebody 15 minutes.
Money quote:
This problem has to do with the TLS SNI extension.
If curl sends an SNI hostname that the server does not recognize, the server will send back a TLS Alert record with Level 1 (Warning) and Code 112 (Unrecognized name) to notify the client that the server may not do what the client is expecting (that's what reason(1112) is referring to).
In the case of Apache, if your VirtualHost does not contain a ServerName or ServerAlias statement which explicitly matches the specified domain name, Apache will send back this TLS Alert.
And this means:
So it looks like openssl 0.9.8 will fail if it receives any TLS Alert records while waiting for the Server Hello record. openssl 1.0.0 has been updated to ignore TLS Alerts that are Warnings:
http://cvs.openssl.org/chngview?cn=14772
Original post:
I ran into a strange ‘bug’ that was a little bit more annoying to debug.
Apparently the OpenSSL 1.x that gets installed by brew doesn’t seem to be 100% compatible with Ruby, at least when you install Ruby using RVM.
I ran into a reproducible problem when trying to connect to a Salesforce sandbox account. This could be distilled down to this snippet:
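The snippet boiled down to something like the following (the sandbox URL is a placeholder): a plain Net::HTTP POST over SSL.

```ruby
require 'net/https'

# Placeholder URL standing in for the actual sandbox endpoint.
uri = URI.parse('https://test.salesforce.com/services/Soap/u/23.0')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
# http.post(uri.path, payload, 'Content-Type' => 'text/xml')
# raises Errno::ECONNRESET when Ruby is linked against OpenSSL 0.9.8
```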
Which resulted in this sad exception after 30 seconds or so:
Errno::ECONNRESET: Connection reset by peer - SSL_connect
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:799:in `connect'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:799:in `block in connect'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/timeout.rb:54:in `timeout'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/timeout.rb:99:in `timeout'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:799:in `connect'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:755:in `do_start'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:744:in `start'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1284:in `request'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1307:in `send_entity'
from /Users/mseeger/.rvm/rubies/ruby-1.9.3-p125/lib/ruby/1.9.1/net/http.rb:1096:in `post'
It turns out that RVM can also install openssl, and it decides to go for version 0.9.8:
$ rvm pkg install openssl
Fetching openssl-0.9.8t.tar.gz to /Users/mseeger/.rvm/archives
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3690k 100 3690k 0 0 394k 0 0:00:09 0:00:09 --:--:-- 456k
Extracting openssl-0.9.8t.tar.gz to /Users/mseeger/.rvm/src
Configuring openssl in /Users/mseeger/.rvm/src/openssl-0.9.8t.
Compiling openssl in /Users/mseeger/.rvm/src/openssl-0.9.8t.
Installing openssl to /Users/mseeger/.rvm/usr
$
After that I just reinstalled Ruby and pointed it at the openssl version just to make extra sure:
rvm reinstall 1.9.3-p194 --with-openssl-dir=~/.rvm/usr
After that, the snippet ends up with a 404 as expected.
]]>The PDF version of the talk is available for download over here.
To get around this, I’ve used VPNs or specialized services in the past. While both of them work pretty decently, I’d rather not force ALL of my traffic over the VPN or pay for a service that I can’t use for much more than Hulu.
One solution would be to go down the VPN route and configure network access based on the user or group using iptables’s owner-match extension, but I honestly don’t like working with iptables and the extension isn’t necessarily available on all systems.
Thanks to @makefoo, I looked a bit more into available “socksification” tools. As far as I understand, these tools hook the C library’s network functions via LD_PRELOAD and redirect connections over a SOCKS proxy on a per-application level. This means you can use them per application, and the only thing you need to point them at is a SOCKS proxy, like the one SSH is able to provide with the -D flag. So the only real setup I have to do is establish an SSH connection before I launch the app, using something like:
ssh -ND 8765 -i /path/to/certificate user@myserver.com
Given that you set up a ‘user’ account on your server (/bin/false or /bin/nologin are your friends) and have public key authentication enabled, this will open a local SOCKS proxy on port 8765 that dynamically forwards all requested ports through that connection. There are several tools that can do the actual redirection of the network requests. I personally have had good luck with ProxyChains. Other alternatives are TSocks or Dante.
Another big advantage is that this doesn’t need anything running on the remote server besides SSH. I suggest looking at current LowEndBox offers for a cheap VPS. I currently use the “Atlanta OpenVZ VPS - OVZ128” from Quickpacket, which comes down to 15 USD a year. I had to ask them for another IP once because for some reason the first one wasn’t detected as being from the US, but besides that it worked great.
If you’re looking for solutions to proxy your BitTorrent traffic, I suggest using Deluge Torrent, which supports SOCKS without the need for forced socksification. And at least for Germany, Oderland is a nice Swedish VPS provider that has a VPS starting at 2-3 Euros/month.
Thanks to this, I can now do all of my backups straight over my regular connection while specific programs use the SSH-encrypted connection. With my AMD E-450 CPU, I can push 100 Mbit/s of transfer speed to the internet without a problem.
The podcasting client was easy: I’ve already used Instacast on my iPad, it syncs with iCloud and works perfectly for what I need.
This post deals with the second app, the running app.
When I was on Android, I usually used runkeeper. It worked fine and the only downside was that I had to go to the website to get the GPX file (and they tried to hide it pretty well). On iOS, the app seemed a bit… jumpy when it comes to filtering the GPS signal:
This wasn’t really what I was looking for.
I also tried a few other ones, but most of them were targeted towards getting you on some website and wanting you to buy some sort of premium subscription.
So I looked for a paid app that offers good value for the money, and this is where I noticed Runmeter:
The company that develops it has a nice comparison online. While this has to be taken with a grain of salt, I think they did an ok job.
Things I like:
So far I am happy with the app; it’s actively being developed and the price of $4.99 is OK for something that I use every day.
–> App store link
A PDF copy of the slides is available here.
To solve this, you have to make sure that the file actually loads the test-unit 2.x gem as opposed to using the included 1.x version from stdlib. You can do this by simply adding this line to your ruby file before requiring test/unit:
gem 'test-unit'
Fun times ahead :)
I always used the alias (blog.marc-seeger.de/year/month/day/slug) as an identifier on Disqus. The Drupal module, by default, uses the Drupal node ID (blog.marc-seeger.de/node/123) as the identifier.
It’s easy enough to make some small changes though. All of this is in the disqus.module file. Here is the original passage that sets the disqus URL:
Just change the ‘alias’ parameter to false to get your aliased path as the disqus URL:
This tells the url() function to NOT assume that the alias is already resolved.
The API for the url function describes it as “Whether the given path is a URL alias already.”
The second thing you have to change is the disqus identifier. Also in the disqus.module search for this:
and replace it by:
Worked for me :)
iPlayer:
Hulu:
These services usually use your IP address to determine what country you’re in. An obvious solution would be to just run the whole traffic over an HTTP proxy located in the country in question. The problem is that while browsers tend to honor HTTP proxy settings, the Flash player will try a direct socket connection first. This could be circumvented by blocking ports, but that is one of the more annoying solutions to the problem.
Something interesting can be seen by looking at how the iPlayer does its geolocation. It checks the IP and, if successful, hands out the URL for the actual streaming video. This URL can be accessed from anywhere; the only problem is to somehow get at it.
While VPN solutions work, they usually tunnel ALL of your traffic over the comparatively slow VPN connection, require manual enabling/disabling, won’t work with e.g. the Apple TV out of the box and are in general a pain to set up. If you want to go this way, I recommend taking a look at privateinternetaccess.com. I used to just grab cheap VPS systems from lowendbox.com, but seeing as privateinternetaccess provides me with endpoints in 9+ countries (US for Hulu, NL for NFL Gamepass, UK for iPlayer, Switzerland for Zattoo, …) for 40$ a year, I’d rather just save myself the hassle.
As mentioned, there are good reasons why one might not want all traffic routed over a VPN connection. I recently came across a pretty interesting service that takes a different approach to this problem: Unblock US provides a DNS-based solution to the whole ‘geolocation check’ topic. After you’ve signed up (free 1-week trial without payment details), you’ll have to use their servers as your DNS servers.
What they do is redirect the DNS requests for geolocation checks to their own IPs, where e.g. a squid server forwards the connection with an IP address that matches the country in question (e.g. the US for Hulu and the UK for iPlayer). The advantages of this approach are:
- Only the necessary traffic will run over the slow proxy. Most of the time, the real video will come directly to you via your regular internet connection
- You can just put the DNS servers into your router and all of your devices (iPad, Apple TV, laptops, …) will be able to automagically use the geo-restricted services
- While the service provider might redirect any website to their servers, they still can’t fake an SSL certificate, so anything important should still be safe (you hopefully ARE using SSL/TLS!)
- In contrast to a VPN solution, this allows you to access services from more than just one country (e.g. Netflix from the US, iPlayer from the UK, TruTV from Germany, TF1 from France)
The price for the service isn’t too bad either. When prepaying for a year, it comes down to a little less than 3 Euro/month. Monthly payments are approximately 3.50 Euro/month.
A downside of this approach is that they have to ‘whitelist’ services and figure out which URLs/domains are responsible for the GeoIP checks. With a VPN, you can use ANY service within that country without further action.
While I haven’t signed up with them so far, I’m seriously considering it once I have some free time on my hands to actually watch all the tv and movies.
This is a bit of magic and even has an eval(), but it works…
It took a while, but here is a new mix on 8tracks.
Have fun :)
Initially, I wanted to use it as a no-hassle solution for OSX Time Machine backups. This worked perfectly and I can easily push 5-6 MB/s over Wifi to the drive. After being happy with this, I decided to see if there is anything else interesting going on with the drive.
Here are some findings:
- The MyBook Live admin console is done in PHP using the full-blown CakePHP framework
- The MyBook Live has a hidden page to enable SSH at “/UI/ssh”
- It has an 800 MHz CPU with 256 MB of RAM
Since we’re pretty good on available RAM, there is a bunch of fun stuff we can do (cough)