WS-BLOG
|
Wednesday, May 19, 2004
  Blogs and Junk Food For the last couple of months, I've been trying to improve my diet by eating fewer carbs, more high-quality protein, less junk food, and more veggies. So far it's gone pretty well. I've lost some weight, my blood sugar levels have stabilized, and I'm saving money by not going out to lunch every day.

However, my technical diet is starting to suffer from junk-food blogs, seductive videos on Channel 9, and the like. I think Microsoft has utilized blogs and Channel 9 VERY intelligently. The videos on Channel 9 are so cool because the people are so interesting and so excited about what they're talking about. I'm a sucker for those videos. I get all pumped up thinking how cool Microsoft and its people are. However, it's like eating a big bowl of pasta: you feel great afterwards, but all you've really consumed is empty calories. The same goes for most blogs. It's a bummer that most of the blogs I really like have gone dormant. No doubt those bloggers have decided to engage in more rewarding activities.

I think it's ironic that while the low-carb craze is sweeping the nation, high-carb madness has hit the .NET technical community.

And no, the irony that I'm complaining about blogs ON my blog is not lost on me ;-) 
|
Thursday, May 13, 2004
  Localization .. the Easier Way Great news for those of us who run web sites that should be localized but aren't. MS is releasing something called the Microsoft Application Translator that should help you localize your web sites and applications more easily than ever before.  
|
  My, My, My If you haven't read the article on MSDN entitled Navigate the .NET Framework and Your Projects with "My", I suggest you do. Perhaps one day soon, VB developers won't have to write any code at all ;-) 
|
Wednesday, May 12, 2004
  HttpRequestValidationException I stumbled across something interesting tonight as I set out to write an HTTP module in ASP.NET that parses Request input and either raises an error or redirects the user to another page if it sees any "suspicious" activity. Suspicious activity (at least in my mind) includes those little tricks hackers employ, such as embedding script code in form input for the purpose of changing the behavior of the application or web site. As it turns out, ASP.NET already performs such validation and raises an HttpRequestValidationException if it sees any form input that starts with "<" followed by a non-whitespace character.
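The rule as described above is easy to approximate. This is not ASP.NET's actual implementation, just a minimal language-agnostic sketch of that check written in Python (the function name `looks_suspicious` is mine):

```python
def looks_suspicious(value):
    """Flag form input that starts with '<' followed by a
    non-whitespace character, per the rule described above."""
    return len(value) >= 2 and value[0] == "<" and not value[1].isspace()
```

(The real ASP.NET validator checks for more patterns than this; the sketch just illustrates the shape of the test.)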

Just had to share that. 
|
Friday, May 07, 2004
  What a Project! Clemens Vasters comes up for air in this blog post from what sounds like a really cool (if exhausting) project. He's using just about every service of COM+ imaginable, even the hardly EVER mentioned Compensating Resource Manager! WOW!! Too cool! 
|
Wednesday, May 05, 2004
  The Death of the Browser Death to the browser!

Mark the time and the place, 'cause I've just had my web service epiphany. In the time before .NET - the release of the first COM-based SOAP Toolkit from Microsoft, actually - I questioned the place of web services. In fact, for years I've wondered if web services, SOA, SOAP, or any of the "buzzwords du jour" (thanks for coining that, Don) would go the way of so many other over-hyped technologies that have not lived up to their promise. The question in my mind (and probably everyone else's) was: are web services a solution to a problem that doesn't truly exist, or are they truly the promised land? My initial take was that web services were a solution without a problem. I mean, who would be foolish enough to try to gather non-relational data from such fragile sources as email or, god forbid, screen scraping? But these were the ad hoc solutions that companies like Microsoft led us to believe were common (and if you don't believe me, I can email you the slides from the two-day .NET training that I used to do for Microsoft - the first few slides in the first module make this claim).

While I still don't think the masses are doing a lot of screen-scraping stuff that would be better served by web services, I think web services can and will prove to be a new way of thinking for companies that want to make their presence known on the internet. Here's my logic:

1. Your company's web site is your primary public-facing electronic presence. People who want your data (e.g. the stuff mixed in with all that HTML) will go to your site to find it rather than trying to contact you directly. This means you have anonymous users interested in your data.
2. Your company's web site is a representation in HTML, meaning you don't let the casual internet user run SQL queries directly against your data or view your public email folders in Exchange. Your web site is a secured, permitted access point into your network. A window at your drive-through, if you will.
3. HTML sucks in just about every way imaginable. HTML and even DHTML will not provide a suitable platform for a rich user experience going forward. Accept the fact that this technology has reached its limit.
4. Your data increases in value the more distributed it becomes. Imagine that overnight every other site on the internet provided links to your site. The simple fact that other sites can incorporate your data (via a link to your site) into theirs gives you a way to distribute your presence across the internet. The way you distribute your presence across the internet is by distributing your data.

So there’s lots and lots of really great data out there on the internet that’s trapped in HTML. I won’t go into detail on why HTML sucks as a programmatic representation of data (next time ;-), but trust me on that for now. Everyone wants millions and millions of page hits per month on their site, and everyone wants the user to stay at their site for as long as possible. The problem is, you can only achieve so much with a single web site. The data from your web site is most likely relevant in other contexts as well. For example, say that you sell premium Brembo brakes and provide detailed information about them on your web site. Well, it turns out that Nissan sells a car with Brembo brakes and uses that fact as a selling point. Isn’t it in Nissan’s best interest and yours to provide more information on those brakes in order to inform and ultimately help sell the customer on the car and its braking system? I think it is. And yes, Nissan could just link to the information on your web site. But Nissan isn’t doing that. Why? Because when you research that car on Nissan’s site, Nissan wants you to go through the “build my car” option, which ultimately brings you to the “find this car at a local retailer” option. In other words, Nissan wants to keep you focused on buying that car, not distracted by your braking system. In this scenario, you as the braking system manufacturer wouldn’t even want Nissan to link to your site, because you don’t want to interrupt the process Nissan is trying to bring the user through. That process is, of course, buying a car. Without car sales, your braking system is worthless. The same goes for your information about your braking system: it is so much more powerful in the context of Nissan’s “I want to build and buy this car” experience. In my mind, this is what web services are promising: the technology you need to put your data into the right context.
As we saw on the TV show The Apprentice, the success of your hot dog stand is all about location, location, location. And thus the success of your internet-facing data is all about context, context, context.
 
|
Monday, May 03, 2004
  Wincopy - Alpha Release Hello again. I've been chipping away at a rewrite of a Microsoft utility called RichCopy. There are others like it on the market, such as xxcopy and robocopy, but I wanted to take a stab at writing one myself.

What does this do? Wincopy implements a Windows Explorer-style interface that lets you drag and drop folders from the left treeview to the right (only). At that point the application starts a recursive subroutine that walks the directory structure of the source and copies the files in each directory. Threads are spawned for each directory search and file copy task a la asynchronous delegates. The pause, stop, and properties features are currently functional.
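Wincopy itself is a .NET app using asynchronous delegates, but the core idea (recurse into the tree, spawn a thread per directory and per file copy, then wait for them all) can be sketched in a few lines. A minimal Python approximation, not Wincopy's actual code:

```python
import os
import shutil
import threading

def copy_tree_threaded(src, dst):
    """Recursively copy src to dst, spawning a thread for each
    subdirectory walk and each file copy (a sketch of Wincopy's
    per-directory / per-file threading approach)."""
    os.makedirs(dst, exist_ok=True)
    threads = []
    for entry in os.listdir(src):
        s, d = os.path.join(src, entry), os.path.join(dst, entry)
        if os.path.isdir(s):
            # One thread per subdirectory search-and-copy
            t = threading.Thread(target=copy_tree_threaded, args=(s, d))
        else:
            # One thread per file copy
            t = threading.Thread(target=shutil.copy2, args=(s, d))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()  # wait for this directory's work to finish
```

Note that this naive version spawns unbounded threads, which is exactly why throttling shows up on the to-do list below in the real tool.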

What doesn't this do? Currently the save and open functions are not implemented. These will be used to save and open config settings.

What's next?
1. save/open buttons for config settings
2. a multithreaded algorithm for large file copy tasks
3. throttling options: max thread count for directory search, max thread count for file copy; fix problem with thread priority
4. better-looking toolbar icons
The next release will be much more polished than this one, and it should be out in about two weeks.
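The throttling item above (capping the number of concurrent worker threads) is commonly done with a semaphore. A minimal sketch in Python, assuming a generic list of tasks rather than Wincopy's actual copy routines:

```python
import threading

def run_throttled(tasks, max_threads):
    """Run each zero-argument task on its own thread, but let at
    most max_threads of them execute concurrently (a sketch of a
    thread-count throttle, not Wincopy's implementation)."""
    sem = threading.Semaphore(max_threads)
    results = []
    lock = threading.Lock()

    def worker(fn):
        with sem:              # blocks while max_threads tasks are running
            r = fn()
        with lock:             # results list is shared across threads
            results.append(r)

    threads = [threading.Thread(target=worker, args=(t,)) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The same pattern works for both knobs on the list: one semaphore sized for directory-search threads, another for file-copy threads.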

The source code is not released under any sort of open source license, or any other kind of license. Just don't hold me responsible for anything that might happen ;-)

The Visual Studio project files, source code, and debug build can be found at deviq.com/derekb/wincopy.zip

Thanks to the very generous Steve Schofield of ASP.NET fame for providing me some space on his server. 
a brain dump of technical stuff that goes through my head each day


