Newest blog entries
A very large tablet-type device recently caught my eye in a local Future Shop. It turned out to be the HP Envy x2, a so-called detachable: basically a giant tablet with an attachable keyboard cover. It has a 15.6" screen and runs Windows 8.1. After a few weeks of checking it out, I decided to buy it.
The first thing to notice is the CPU, an Intel Core M-5Y70, along with 8GB of RAM. It's a very fast machine, snappy to use and without much lag at all. Most Windows tablets have abysmal performance, and even many laptops will lag at simple tasks like scrolling through modern apps. This one feels very much like a PC. The full HD screen is also very nice, with wide viewing angles and bright colors, typical of a high-end tablet. The 500GB hard drive is also a rare thing to find in such a thin device.
On the topic of Windows 8, I've never been a fan; I still run Windows 7. I've given the start screen a try, but even on a fully touch device (I use it almost exclusively without the keyboard) I still ended up installing Start8 and ModernMix. I actually use a lot of modern apps, since they work better with a touch screen, but I much prefer always having the taskbar at the bottom to switch between apps rather than the clumsy start screen interface.
On the downside, the volume key on the side is very clunky, and I have to resort to the on-screen volume display for any precise control. The device also came with the usual load of crapware, but I got rid of most of it. And of course it only has the Intel on-board video chip. I certainly would not use this as a full desktop replacement personally, but for my uses, which are watching videos, browsing Reddit, Twitter, a few web sites and email, all through modern apps, it's a great device.
This weekend I had to figure out how to properly install a CGI script on Windows IIS and give it the permissions it needs to run. Surprisingly, there is little information about this out there, especially if you run something other than ASP.NET.
By default, any CGI runs under a very restricted user called IUSR. This user can't write to files, access the Registry, or do basically anything at all locally other than writing to the temporary folder, which makes it impossible to store configuration information of any type. For that you need to assign the script to a local user account.
Here I will show you how I solved the issue using the command line, so this can be scripted. First, create the account and add it to a group with the proper privileges:
net user %USER% %PASSWORD% /add /y
net localgroup "Administrators" %USER% /add /y
Obviously, replace %USER% and %PASSWORD% accordingly. Then, configure four items in IIS:
%windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/sites /siteDefaults.virtualDirectoryDefaults.userName:%USER% /siteDefaults.virtualDirectoryDefaults.password:%PASSWORD%
%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/%FOLDER% /physicalPath:"%CD%"
%windir%\system32\inetsrv\appcmd set config -section:isapiCgiRestriction /+[path='%CD%\app.exe',allowed='true']
%windir%\system32\inetsrv\appcmd set config /section:handlers /accessPolicy:Execute,Read,Script
The first line configures all upcoming virtual folders to use the new user we just created. Then, we create a new virtual folder %FOLDER% which links to the path %CD%, in this case the current directory. This means that if you run this under C:\my-app\www and %FOLDER% is set to my-app, then accessing http://localhost/my-app will load the files in that local folder.
Next, we add an entry under ISAPI Restrictions in IIS to allow the execution of app.exe. Once again, the name of your app would have to be used here. Without this line, IIS will not allow your CGI to run unless you turn on the ability to run unspecified scripts.
Finally, we grant the proper policies to all handlers. This is needed because in this example, we use an exe file, and by default the CGI-exe handler is not enabled. Once again these lines can be adapted to any type of CGI script, from binaries to PHP, Perl, etc.
Remember also that IIS has to be restarted or it will not see the newly created user:
iisreset
Once this is all done, assuming you have IIS properly installed with the CGI and ISAPI Extensions features, your script should run fine.
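As a quick sanity check, any minimal CGI script will do. The example above uses a binary called app.exe, but here is a rough Python equivalent, assuming you have a Python interpreter on the server and register it as the handler for the script instead:

```python
import os

def cgi_response(body):
    # A CGI response is a Content-Type header, then a blank line, then the body.
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    # IIS hands request details to the CGI through environment variables.
    query = os.environ.get("QUERY_STRING", "")
    print(cgi_response("Hello from IIS CGI. Query string: " + query), end="")
```

If loading the virtual folder in a browser shows that text, the user, virtual directory and handler configuration is working.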
It's been a week now since World of Warcraft, a ten-year-old game, released its latest expansion: Warlords of Draenor. This brought the game back up to over 10 million players, and of course I was one of them. I would say that most of the hype and comments out there are pretty accurate. Draenor is, so far, a pretty damn good expansion. It's certainly better than MoP already, and might turn out better than Cataclysm and Burning Crusade as well. Leveling from 90 to 100 was more fun than in the previous few expansions, with more varied quests and more cinematics than ever before. I was apprehensive about the concept of garrisons before release, but I think they turned out well, kind of like a personal daily quest hub. I do hope they will expand on the concept in the future by adding more customization options. I find the dungeons more interesting and varied than the Pandaren ones as well. Gearing up isn't much of a chore at all.
Of course it wasn't all good. The first two days of the expansion were plagued with stability issues. In fact, from a performance standpoint, this was the worst release since vanilla. I understand that an incredible number of people came back to play it, but they should have been better prepared. As for the world PvP zone, Ashran, it's too early to tell, but I do worry about its longevity, with every game so far either a complete stomp by one side over the other, or a standstill in the middle of the map. Located out of the way as it is, I wish they had just made it into a battleground and left the faction cities in their originally planned locations.
Overall, I would give Draenor an 8/10.
This is the second weekend I've tried to sell my iPhone 5 for $280 on Kijiji. It's unbelievable what a terrible experience it is. I can assure you, scammers are alive and kicking: throughout both weekends, I've had nothing but scam attempts. Tons of them. Some I didn't even bother replying to, once I noticed that all of the texts I was receiving came from numbers outside the country, never mind the fact that this is supposed to be a local classifieds board. Those I did contact all replied in the exact same way:
When talking about cloud services, privacy is always a touchy subject. The usual saying is that for a cloud service to be useful, the cloud provider needs to have the keys to unlock user data. That's the only way to provide indexing, sharing, sorting and searching features. If the user wants perfect privacy and encrypts all of his data before sending it over to the cloud, then not much can be done on a single blob of ciphertext.
But we've progressed beyond that point, and this argument should no longer be part of the equation. One solution is CryptDB, now several years old and a tested approach, which provides a database system that can be used to do queries on encrypted data. The idea is that rows and columns can still be defined, but the actual text in those cells can be encrypted in a smart way. For example, let's say you store notes in the cloud. Those notes can have tags attached to them. Even if you encrypt each tag with a key that only you possess, so that your cloud provider (or hackers, or spying agencies) cannot decrypt your data, you can still do comparisons between each tag. The server can know whether your encrypted search query matches the encrypted tag, without knowing what the actual text says.
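To give a feel for the idea, here is a minimal sketch of equality matching on encrypted tags. It uses HMAC tokens as a stand-in for CryptDB's deterministic encryption layer; the key and tag names are made up for illustration, and this is not CryptDB's actual scheme:

```python
import hashlib
import hmac

# Hypothetical secret key that only the client ever holds.
SECRET_KEY = b"client-only-secret"

def tag_token(tag):
    # Deterministic tokenization: the same tag always maps to the same token,
    # so the server can test equality without ever seeing the plaintext.
    return hmac.new(SECRET_KEY, tag.lower().encode(), hashlib.sha256).hexdigest()

# The client uploads encrypted notes along with tokenized tags.
stored = {tag_token("finance"), tag_token("travel")}

# Later, a search for "travel" is tokenized client-side and sent to the server.
query = tag_token("travel")

# The server matches tokens without learning what the tags say.
print(query in stored)  # prints True
```

Deterministic schemes like this do leak which items share a tag, which is the trade-off CryptDB makes for its equality layer: the server learns repetition patterns, but never the content.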
And that is key to reaching the holy grail of cloud privacy. You don't need to be a privacy nut to demand this level of security. Google is among the few cloud companies really stepping up encryption, but as long as the keys remain in the hands of providers, there will always be risks. I think it's past time that client software, not cloud servers, became the keeper of encryption keys. Most of the benefits of using the cloud can still be achieved with encrypted query processing.
Last night before going to sleep I decided to look up a Kindle book on my phone. Here's how the process went:
* I load amazon.ca and start searching, eventually finding the book I want.
* I click on Sample, and the site asks me to log in. I type my email and long password on the iPhone keyboard just to see a sample.
* I get told that my account isn't allowed on amazon.ca, and that I need to go on amazon.com instead.
* The site redirects me to the main page instead of the specific book I wanted, so I have to do the search all over again.
* Once I find the book once more, I click on Sample, and the site asks me to log in once again.
* I sigh, turn off the iPhone.
I understand that Amazon has to restrict accounts; that's something publishers force upon them. But the link from one site to the next should at the very least send you to the right page, so you don't have to conduct your search again. And the site does know that my account exists, so it should pass my credentials along, so I don't have to log in twice in a row. Overall, I feel like this user experience should be better.
I've always been a Civilization fan. My very first PC game was the first of the series, on four 1.44MB floppy disks. Since then, the series has mostly improved, but it's also become somewhat stale. With Civilization: Beyond Earth, it seems like the developers were attempting to inject some new blood into the series. Unfortunately, I don't think they achieved much.
First, this version plays very much like Civilization V. Once you look past the alien landscape, it's still the same mechanics; even the units play the same. As for the added flavor, it's not all good. Being able to send satellites into orbit adds another layer to the game, but it's a thin one. The rest is mostly cosmetic. For some reason they added a lot more natives, aliens in this case instead of the barbarians of previous games, but they are mostly annoying rather than adding anything worthwhile. They also scaled back how much diplomacy you can do.
I guess between this version and, say, Civilization 3 or 4, I'd rather play one of the originals. In fact, I find myself playing more SimCity than this right now.
There are many programming and scripting languages out there. As time goes on, new ones get created and take flight. Right now, some of the popular newcomers include Node.js, Go and Swift. I've said before that I don't consider myself a programmer, but I still write quite a bit of code as part of my work and also for fun. I like to try out new languages, and in the past year I probably wrote projects in half a dozen different languages. It allows me to do cool stuff like the Rosetta Stone of Coding, for example.
But between all of these different languages, I still come back to Perl as my main choice, for two main reasons. First, the giant catalog of modules available out there. One issue with new languages is that while they have the advantage of starting fresh and reinventing concepts in a new, imaginative way, they also don't have the array of functions that older languages have. Perl is said to have a module for everything you could possibly want, and if you check on CPAN, you will see what that means. Whether you want to access the Windows Registry, do low level network coding, authenticate against Twitter or do complex math functions, there's a module for that.
The other reason is its portability. A lot of new projects have installation instructions that say something like "Use bower to install." This certainly makes things fast and easy if you already have a development platform installed, but if you don't, you still need to go through the non-intuitive steps needed to get the multiple layers of dependencies. A lot of projects, especially in the open source world, also assume you're on Linux. Things start to break pretty quickly on Windows, OS X or other platforms. Perl is not only native to pretty much every platform out there, it comes pre-installed on most of them. Plus, there are neat utilities like Perl Dev Kit, which I use to create cross-platform binaries, allowing me to release a single executable with no dependencies. Sure, it's not the most efficient way to do things, but sometimes it's the most convenient: when you want to use a script on a system with no development libraries installed, or share it with someone who wouldn't know what a command line looks like.
This isn't to say that I don't enjoy writing in other languages. Lately I've been playing with Node.js and for certain things, it's very neat. But even though Perl doesn't seem to have that cool factor anymore, and isn't discussed as much on Internet forums, I haven't found a language that can replace it yet. Oh, and as an aside, my least favorite language of all: PowerShell. A thing of nightmares...
Virtual machines have become the norm in the IT industry. Most servers out there have been virtualized in one way or another, whether in the public cloud on Microsoft Azure or AWS, or on-premises, where a single server will often run a dozen VMs for various purposes. But VMs have one inherent flaw: overhead. Each VM is a full operating system running in its own container along with whatever processes it requires. In many cases this is fine, but as VMs have become more and more ubiquitous, this is sometimes pushed to the extreme. It's not rare to see VMs running a single app. Worse, if the app needs things like SQL Server, IIS or other common roles, each VM will be configured with those roles turned on. You end up with many VMs replicating the same roles when it's not always necessary, which can make things like management and backup quite complex.
There's been a lot of development aimed at minimizing this overhead, like hypervisors managing memory in a way that allows a single page in RAM to be shared by many VMs when they all need the exact same data, but there are limits to this type of mitigation. That's where Docker comes in. Docker, and other similar systems, take the concept of a VM and scale it down to the process level. Instead of spinning up a whole operating system, with all of its overhead, in order to run an application, we now have applications that live in their own containers. RightScale has a good article describing the differences between VMs and Docker. In a nutshell, you take an application, its libraries, files and other dependencies, and put them all inside a container. The result is something that takes less time to start up and fewer resources, because all of your contained apps can share a single OS, yet they can still be distributed easily.
Last week Microsoft announced that Docker support was coming to Windows Server. I think this will help propel the technology, and while the past 10 years have been the years of VMs, I think in a few years we will see that start to shift. VMs are certainly not going away, but instead of having the impulse to start a new VM for every project, I suspect a lot of people will keep a single powerful system running, and spin up containers instead.
Hi, my name is Patrick Lambert and I'm a freelance content creator living in Montreal, Canada. I have over 15 years of experience in technology and am A+, i-Net+, MCSA, MCTS and Linux certified.
I've written for...
...and many more!
Movies: Star Wars, Planet of the Apes
TV shows: The Walking Dead, Breaking Bad, Game of Thrones
Devices: PC, iPhone, iPad
Games: Half Life 2, KOTOR, Fallout 3
MMOs: World of Warcraft, SWTOR
- NodePoint - Ticket tracking system
- Steam - My Steam profile
- GitHub - My GitHub code
- CPAN - My Perl modules
- IMDb - My movie ratings
- Android Apps - The Android Apps I've created.
- Commissions - Information if you want to commission art from me.
- Aurebesh - Learn the language of Star Wars.
- Crypt - Online encryption and hashing service.
- Headers - Headers and browser information.
- WebDB - Simple cloud configuration store.
- Samples - Hire me here.
- 3D Models - The 3D models I've done and released for free on ShareCG.
(C) 2014 Patrick Lambert