Thursday, December 23, 2004

It's Alive!!!

It's alive!!! The old Mac has had some life breathed back into it. The new power supply arrived today, and the old dog has been running ever since I installed it. I was a little disappointed to find that my old game (Warlords) had been deleted and replaced with SimCity 2000, but I was awash in nostalgia as I tinkered around with it. I played through all of the old Mac sounds. I thought it was really cool to find a sound that I made back in high school (a belch, how eloquent). I also discovered that my cousins (Jake and Kaitlyin, I know where you live) had added some sounds of their own, along with some "clever" jokes.

The Mac still runs fantastically. It has System 7.1 loaded, along with Microsoft Word, Excel, Canvas, and the usual Mac fare. It also has some additional software for the HP printer (which I have) and the Teleport modem (which I don't have). I'm debating whether it would be worthwhile to spend another $30 to get a NuBus card and an Ethernet adapter. If I did, I could put the Mac on our home network. I'm not really sure it would be worth the effort though, as I can currently sneaker-net files over to it if I need to.

I'm going to scrounge around the web for a while tonight to see if I can find a free download of the Warlords game. I'm sure that all of the original disks are long gone. It made me feel like a kid all over again to be playing on the old Mac!



It's alive!

The offending power supply

Sunday, December 5, 2004

Sad Mac

We did some shopping today, and I picked up a 3.6V 1/2AA battery at Radio Shack. Although I'm sure it was needed, it didn't solve the greater problem of getting the thing to boot. After some more testing, I think I've determined that the root of the problem is a dead power supply. I cracked it open and watched it in operation. There is some type of electro-mechanical contact switch that is getting thrown, and immediately lost. I can see the switch move, but it never stays. Occasionally, it will stick down for a second and I'll hear the happy Mac chime. So I'm pretty sure that the motherboard, hard drive, floppy, and everything else are okay. I checked out eBay, and it looks like I can get a complete Mac IIsi minus a hard drive for $6 (plus $8 shipping). I'll probably get it and use it for spare parts. I'm still not entirely sure what I'm going to do with it once it runs (other than play Warlords). Any suggestions?

Return of the Mac



Ye Olde Mac IIsi returned to me this weekend. This was our family computer while I was in high school. The Mac is now getting close to 15 years old, and might still be functional. My grandmother had been using it for the past couple of years for web access, e-mail, and solitaire. A couple of months ago she started complaining that it wouldn't boot up anymore. My dad searched the web and got a great deal on a Dell as a replacement. She has since been set up with a DSL account, and should be back on her way. I asked my dad if I could have the old Mac, mostly out of nostalgia, and to see if I could get it running again.


I got it all pieced back together and turned it on. It fired up on the first try to the smiley Mac. The first thing I noticed was the smell of smoke and air fresheners. My dad told me that the Mac had been connected to an outlet that was controlled by a light switch. At first, I thought it was a simple matter of the light switch being off that caused the beast not to boot. I was playing around a bit, and noticed that it was set in Black & White mode. I knew from years of using it that it supported color, so I opened the control panel and switched it to color mode. Bad move. The IIsi immediately shut down. I hit the power key on the keyboard, and the power light blinked for a second, but it was right back out. This must be what my grandmother had been talking about. As the machine was very old, I wasn't sure if this could be a dust issue (dust in a PC can create shorts), a blown motherboard, or something else. It had always been connected to a very nice surge suppressor, but that didn't totally rule out a massive surge causing problems. Also, smoke and spray air fresheners can eventually coat the electronics and cause a short.


I opened the case and started poking around. The first thing I noticed was how cleanly the old Mac was assembled. If you open a new Dell, you'll find that everything is neatly wire-tied away, but you still end up with cables criss-crossing the case and a general look of confusion inside. The Mac was very clean and tidy. The connectors on the motherboard for the floppy and hard drives were located right next to the mounting positions of the drives within the case. This made for a very short cable run (a couple of inches, if that). No tools were needed to dismantle it; everything had locking tabs that made it easy to remove and reinstall the parts. The power supply even snapped into position. In fact, the only power cable run in the entire case was from the motherboard to the hard drive. Kudos to Apple for a great design.

After some looking around, I noticed a lithium battery in the center of the board. This set off a flag in my mind. Most PCs have a battery that supplies power to a segment of memory on the mainboard. This can be used to store settings, keep the system clock running, or any of several other options. In the Mac's case, the battery supplies power to the segment of memory that retains settings. I realized that it was when I changed settings (black and white to color) that the Mac powered down. I grabbed an LED and checked for any voltage on the battery. It was totally dead. So I'm suspecting that this is the root cause of the Mac not booting. I can order a new battery online for $6, or pick one up at Radio Shack for $15. Of course, I could just go on eBay and buy a complete Mac IIsi for $25, but that wouldn't be nearly as fun as reviving this old dog.


My mission now is to locate a 3.6V 1/2AA size Lithium battery and attempt to revive the Mac. Once I do, I'll be happily playing Warlords again!




Tuesday, November 23, 2004

Serial Ports

Serial ports are a common tool we use to communicate with devices at work. Usually, if a device has a serial interface, you can breathe a sigh of relief that you are going to have success communicating with it. There are still some things to watch out for, though. Getting the baud rate, parity, data bits, and stop bits configured properly is critical.

I helped a colleague at work today with just such a problem. He was trying to receive data in his .NET client from a barcode scanner that we had configured in the Descartes OmniServer OPC server. He would get the initial data change event when the OPC item was created, but nothing after that. I wrote the prototype he was basing it on, and I knew that it worked, as I had used the same barcode scanner in testing it. So we started poking around. First, we brought up HyperTerminal and looked to see if we were getting any data. Sure enough, data was coming across. Then we went back to the OmniServer configuration to be sure that our topic was configured to use COM1, and that the device was using the right protocol, in this case an Intermec protocol. It was, but still, no data. So then I got to thinking, "I wonder if HyperTerminal is using a different configuration". That was it! In HyperTerminal, we were set up for a 9600 baud 7-E-1 configuration. In OmniServer, we were looking at 9600 baud 8-N-1. Big difference! The barcode scanner was sending 7 data bits, but our OPC server was looking for 8 data bits. That's why it never sent a data change event. We got it reconfigured, and it worked beautifully from there.
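A quick way to double-check framing like this is to open the port with the exact settings and see whether clean text comes across. Here is a minimal sketch of that idea; it assumes the SerialPort class that ships with .NET 2.0 (earlier frameworks need P/Invoke or a third-party library), and the port name and settings are just the ones from the story above:

```csharp
using System;
using System.IO.Ports; // SerialPort was added in .NET 2.0

class FramingCheck
{
   static void Main()
   {
      // The scanner here was talking 9600 baud, 7 data bits, even parity,
      // 1 stop bit (7-E-1). A listener opened as 8-N-1 treats the parity
      // bit as data and sees garbage (or nothing) instead of barcodes.
      using ( SerialPort port = new SerialPort( "COM1", 9600, Parity.Even, 7, StopBits.One ) )
      {
         port.ReadTimeout = 5000;               // don't block forever if nothing arrives
         port.Open();
         Console.WriteLine( port.ReadLine() );  // should show a readable barcode
      }
   }
}
```

If the output is readable here but garbled under your OPC server's settings, you've found your framing mismatch.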

Sunday, November 7, 2004

It Works!

I've won my fight with .NET Remoting! Thank goodness for Ingo Rammer and Google, as those two resources provided the answers to all of the "gotchas" that bit me on this. My final solution involves a shared base class assembly, which defines the abstract base class for each of the objects exposed by my remoting service. Next, there is the server implementation, which is composed of a server activated class factory serving up the client activated remoted objects. And finally, there is the client piece, which first gets an instance of the class factory, and then uses that reference to get instances of the client activated objects.


First, let's talk about the shared assembly. This is the easiest one. I needed to create, at a minimum, two abstract classes in my shared assembly. The first abstract class describes my class factory.

public abstract class ClassFactoryBase : MarshalByRefObject
{
   public abstract CAOBase GetCAO();
}

So far so good. Now comes the first gotcha I experienced. My client activated object is exposing the functionality of an existing COM object on the server. So my first idea was to simply implement the interface exposed by this COM object. This was a bad idea, and caused remoting to barf big time. So instead, my base class for the client activated object declares the interface's methods, but doesn't name the interface in the class declaration.

public abstract class CAOBase : MarshalByRefObject
{
   public abstract bool COMObjectMethod();
}

All of this code went into a file I named Shared.cs, and I compiled it to an assembly, Shared.dll.
Next, I needed to implement the factory and the client activated object.

public class ClassFactory : ClassFactoryBase
{
   // Generic constructor required for Remoting
   public ClassFactory()
   {
   }

   public override CAOBase GetCAO()
   {
      return new CAO();
   }
}

public class CAO : CAOBase
{
   internal Interop.ComObject comroot;

   // Generic constructor required for Remoting
   public CAO()
   {
      comroot = new Interop.ComObject();
   }

   public override bool COMObjectMethod()
   {
      return comroot.COMObjectMethod();
   }
}

Here, the ClassFactory creates a new instance of the CAO object on demand and passes it back. The CAO object creates an instance of the COM object to be exposed, and its methods pass through to the COM object layer. I put these classes in a file called Server.cs and compiled them to an assembly named Server.dll.

I wanted to use IIS as the host for my remote classes, so I needed to do a couple of things to enable this. First, I created a virtual directory in the IIS admin. In that virtual directory, I created a bin directory. I copied my Shared.dll and Server.dll into the bin directory, along with the interop.ComObject.dll. Finally, I created a web.config file and placed this in the root of the virtual directory. The web.config looked like this:

<configuration>
   <system.runtime.remoting>
      <application>
         <service>
            <wellknown
               mode="SingleCall"
               type="ClassFactory,Server"
               objectUri="ClassFactoryURI.soap" />
         </service>
      </application>
   </system.runtime.remoting>
</configuration>

This was another stumbling block for me. Originally, I had excluded the ".soap" extension from my objectURI. I had read something that indicated that it was unnecessary when hosting your object in IIS. This was dead wrong. Unless your object URI ends with ".soap" or ".rem", IIS will not pass the method calls on to the remoted object. This took me a while to figure out, so don't make the same mistake. Fortunately, everything else was cake from this point, as IIS takes care of all of the nastiness of load balancing, connection pooling, and security for connecting to your remote object.

Finally, it's time to implement the client. Here is my quick and dirty client code:

public class Client
{
   public static void Main( string[] args )
   {
      ClassFactoryBase factory =
         (ClassFactoryBase)Activator.GetObject(
            typeof(ClassFactoryBase),
            "http://remotinghostserver/VirtualDirectory/ClassFactoryURI.soap" );
      CAOBase cao = factory.GetCAO();
      if ( cao.COMObjectMethod() )
      {
         MessageBox.Show( "Success!" );
      }
   }
}

This code goes in Client.cs, compiles to Client.exe, and is deployed with only the Client.exe and Shared.dll. I didn't even need a config file! This is because I'm using the Activator.GetObject method to create an instance of my object. Why am I doing this? Well, another option would be to use soapsuds.exe to generate the metadata for my remoted object and reference this when compiling my client. When done this way, the client can simply use the new keyword when the appropriate remoting configuration information is in the app.config file. Very convenient for the code.

Unfortunately, soapsuds.exe is broken. For some reason, when you host your remoting classes in IIS, soapsuds makes mistakes when attempting to generate the metadata. The result is that you will successfully expose your class factory, but will get a type cast exception when attempting to get an instance of your CAO. This is seriously bad news for the solution I needed to make.

Another option would be to use shared interfaces (as opposed to shared abstract base classes). When using shared interfaces, you can again configure the app.config file to allow the new keyword to be used. However, there is another gotcha here. When using shared interfaces, CAO instances cannot be passed as arguments to methods on other remote objects. For our solution, several of the CAOs need to interact with each other. Using abstract base classes allows us to pass these references as arguments to our other remote objects. The only drawback is that we are forced to use the Activator.GetObject() call to instantiate our remote objects rather than the new keyword. It's a small price to pay, I think.


So what do I do from here? The next thing I need to do is verify that the object lifetime is being managed properly. I don't want to strand a bunch of instances of my remote objects on the server. So my next effort will be to investigate CAO object lifetime leases. Once I get there and have interesting information to pass on, I'll be sure to post it here.
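For anyone wanting a head start, the usual hook for this is overriding InitializeLifetimeService on the MarshalByRefObject. Here is a sketch of the direction I expect to take; the ILease / LeaseState types are from System.Runtime.Remoting.Lifetime, and the five- and two-minute values are placeholders I haven't tested, not recommendations:

```csharp
using System;
using System.Runtime.Remoting.Lifetime;

public class CAO : MarshalByRefObject
{
   public override object InitializeLifetimeService()
   {
      // Grab the default lease and tune it so that idle CAO instances
      // expire on the server instead of hanging around forever.
      ILease lease = (ILease)base.InitializeLifetimeService();
      if ( lease.CurrentState == LeaseState.Initial )
      {
         lease.InitialLeaseTime = TimeSpan.FromMinutes( 5 ); // initial lifetime
         lease.RenewOnCallTime  = TimeSpan.FromMinutes( 2 ); // extend on each call
      }
      return lease;
      // Returning null instead would give the object an infinite lifetime,
      // which is exactly the stranded-instance problem I want to avoid.
   }
}
```

Whether the defaults actually need changing for our CAOs is what I still have to verify.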

Thursday, November 4, 2004

My Continuing Fight with .NET Remoting

So I went for the sleep option around 3:15am. I should have picked option two (coffee), because my brain was still way too active to let me fall asleep. I kept thinking about different options to try. In any case, I'm a couple of steps closer to getting my solution to work.

For one of our systems, we have a set of COM objects that are hosted in COM+. These objects are exported through COM+ / DCOM to several clients across a LAN / WAN environment. The clients create an instance of a broker object exposed through COM+, and then use that broker to create instances of other server business objects (SBOs). Each of these instances is intended to hold state (they connect to a variety of ERP and database systems, so opening and closing them often has a high transaction cost). Our objective with this project is two-fold. First, the client that activates these distributed components is Wonderware's InTouch 7.x. The license costs for our customer are getting a little high, so they are looking to replace the InTouch client with a .NET client written in C#. I had originally considered using XML Web Services hosted in IIS to wrap the functionality of the SBOs. This was before I learned that the SBOs held state. XML Web Services is a stateless architecture, so it wouldn't do for accomplishing our task. That is why I chose to go the .NET Remoting route. I'm still going to use IIS to host the remoting components, as it will save me time by acting as the hosting process, providing security, and taking care of channel and load management.


So after banging my head on the wall last night, I got this far: I have a Client Activated Object (CAO) that is hosted by IIS. I've written the web.config file to expose the CAO. I used "soapsuds.exe -nowp -ia:MyRemoteCode -oa:MyRemoteCode_Proxy.dll" to generate a metadata proxy for the remote object. I created a client that registers the remote object, and then creates an instance of it. So far so good. Here is where the trouble starts, though. I create a second remotable class that also inherits from MarshalByRefObject. This one is instantiated by making a method call on the first remoted object. However, when the client calls "firstRemoteObject.GetSecondRemoteObject()", I get a type mismatch error on the return type. Evidently, the metadata generated by soapsuds.exe doesn't exactly match the typing information passed back by the remote call. So this is where I am stuck tonight. I spent the day working on another project, so I haven't been able to touch it yet. If I can stay conscious long enough, I may try again tonight. If I get it solved, I'll post sample code.

CAOs hosted by IIS that implement COM Interop Interfaces

Wow, did I have an intriguing assignment today. I have a client that is going to be developed in C# (Microsoft .NET technology). This client is going to remotely access another .NET assembly on a server machine, across the network. The server-side assembly is going to be client activated (CAO), and the CAO needs to expose the functionality of a COM component through interop. I thought I could just create a class that inherits from MarshalByRefObject and implements the interface of the interop component... but NOOOOOO! Couldn't be that simple, could it? So instead, I'm sitting here at 2am still banging out the intricacies of creating and passing references around across application domains.

So far, I've learned that implementing the interface for an interop component is seriously bad news. Forget about remoting your object after you've done that. Next, don't ever install the .NET Framework 2.0 beta on your primary development box. It did me the courtesy of modifying all of my IIS virtual directories to automatically select the 2.0 revision rather than the stable 1.1 revision. So now I need to manually select 1.1 for all existing and newly created virtual directories. Thanks, .NET 2.0. My sticking point now is figuring out how to get two remoted objects that are hosted in the same application domain to interact. Not a simple task so far. More on this wonderful story once I get a cup of coffee... or sleep.

Thursday, July 1, 2004

Visual Studio Express


Free stuff is good! Microsoft is releasing a new variant of their Visual Studio development products. The "Express" line is a stripped-down beta version of the upcoming Visual Studio 2005 products. I've been looking for a way to get a cheap copy of Visual Studio at home for a while, and this just fell in my lap. It is going to be a huge help in studying for my certification exams, and if I want to tinker around with something, I can!

Sunday, June 13, 2004

Netgear WG602 Wireless Access Point

Last week I posted that the Netgear WG602 had a backdoor password, and that everyone should upgrade to firmware version 1.7.14. Unfortunately, that firmware revision didn't get rid of the backdoor, it just changed the username and password. Netgear has released a new firmware revision, 1.7.15, which eliminates the backdoor.

http://kbserver.netgear.com/support_details.asp?dnldID=741

If you use this product at home, I recommend that you download the firmware upgrade and install it.


How could the backdoor affect me?

Anyone who can connect to your access point would be able to change its settings. If you have enabled security and filtered the list of MAC addresses that can connect, this backdoor will have very little effect on you. However, I would still recommend that you patch this backdoor.


What models have the bug?

Only the Netgear WG602 version 1.0 product is known to exhibit this bug. It is based on a Z-Com chipset. The WG602 version 2.0 product does not have this vulnerability.


What else can I do to protect my wireless network?

At the very least, change the default admin password and enable WEP security. Unfortunately, nearly all networking equipment intended for home use is shipped in an unsecured state by default. This provides the least confusion when setting up the network, but also leaves you open to attack. By changing the default password and enabling WEP, you are preventing the casual and curious wireless surfer from hopping on your network.


I've enabled WEP and changed the password, now what?

Great, you've taken the first steps towards securing your network. Unfortunately, the WEP standard has a couple of flaws. It uses some common keys in the encryption process that can be easily discovered. Anyone with enough free time on their hands can sit outside your network and eventually determine the WEP key and get on your network. There are some additional steps you can take to protect yourself though.


- Disable SSID broadcasting.

This feature is not available on all wireless access points. Your access point broadcasts a beacon at a regular interval to tell wireless users that it is available. This beacon includes the name, or SSID, of your WAP. By turning this beacon off, wireless surfers will not know that your WAP exists unless they specifically look for it.


- Force a VPN for all wireless clients.

If you want to get really secure, connect your WAP to a dead pool on your network. By dead pool, I mean a network connection that has no access to the web, the company intranet, or any other resources on your network. It is completely isolated. From there, users must create a VPN connection to reach any resource on your intranet. There are many benefits to this type of setup. First, you can turn off MAC address filtering and WEP. People who connect can't do anything without the VPN, and this reduces the maintenance needed to update WEP keys and MAC lists. Second, a VPN connection provides a much stronger encryption level, protecting any data you may transmit wirelessly.

Sunday, June 6, 2004

Netgear WG602 Wireless AP Security Problem

For anyone out there using a Netgear WG602 Wireless Access Point, be aware that a backdoor password was recently discovered that would allow anyone to hack your device. Be sure to go to the Netgear support page to download the latest firmware for your access point. The latest firmware revision removes the backdoor password.

Wednesday, May 19, 2004

Book Review : Unleashing the Killer App

For my business information systems class, we were assigned to read the book "Unleashing the Killer App : Digital Strategies for Market Dominance" by Larry Downes & Chunka Mui. I thought it was an outstanding book covering the changes that any Killer App technology can have on a firm, and how best to enable your firm to not only cultivate a Killer App, but to know how to deal with the result of unleashing one.


The book is primarily focused on a discussion of the 12 critical tenets to unleashing a killer app:


  1. Outsource to the customer
  2. Cannibalize your markets
  3. Treat each customer as a market segment of one
  4. Create communities of value
  5. Replace rude interfaces with learning interfaces
  6. Ensure continuity for the customer, not yourself
  7. Give away as much information as you can
  8. Structure every transaction as a joint venture
  9. Treat your assets as liabilities
  10. Destroy your value chain
  11. Manage innovation as a portfolio of options
  12. Hire the children

I found the book very easy to read and easy to pick up on the concepts that the authors were trying to get across. This is a book that you can sit down and read in a single sitting, and come away with ideas of how to make your business better, and how to be Killer App friendly. You can either purchase a hard copy from Amazon, or read the entire book online.


Don't let the title fool you, this is not a how-to manual on creating killer app technology. Instead, it is a primer on how killer apps and new technology fundamentally affect the economic and business environment. Heavily based on the economic research of Coase and on Moore's and Metcalfe's laws, the authors put forth sound reasoning on how new technologies will continue to change businesses. The book was originally released in 1998, and some of the examples may seem a bit dated, but the information and guidance the book provides is timeless. For anyone desiring to succeed in business today, the topics shouldn't be new, but they should serve as a reminder of the pitfalls that some businesses succumb to.


After reading this book, I immediately came away with ideas of how to change our business for the better. We are a services company, and although we don't have a specific product, our services are our killer app. In order to enhance that killer app status, we can utilize some of the basic principles enumerated in the chapters to continuously improve our business.

Wednesday, May 5, 2004

Certified!

I passed another Microsoft Certification exam today! Woohoo! I completed exam 70-315: Developing Web Applications with Visual C# .NET. I had been studying for the exam for a while, but I had not taken the time to really concentrate on it. So yesterday I spent all day studying the practice tests and reviewing the book. By mid-afternoon, I felt like I was as ready as I was ever going to be. I scheduled my exam for 9:45 this morning, and passed with flying colors. I scored 886 out of a possible 1000 points; a score of 700 was needed to pass. I'm glad I finally forced myself to get that exam out of the way. I have one more exam to go before I achieve my Microsoft Certified Application Developer (MCAD) status, and two more after that to be a Microsoft Certified Solution Developer (MCSD).

Wednesday, March 31, 2004

.NET Interop

Yesterday, a coworker of mine was getting frustrated because he was trying to access a COM component from within the .NET Framework, and was having no joy. I had written a small test application to access the same component (an OPC server) a year before, so I sent him the code. He pulled the code into his Visual Studio project, but still no joy. After much research, we learned that the code would only work on a previous version of the framework (version 1.0). For version 1.1, the code, and in fact the entire solution architecture of using the automation wrapper, would no longer work. After much research and quite a bit of stop-and-go testing, I've finally been able to access the OPC server from the .NET 1.1 framework. My new test code does a lot more than my previous code did, too.


The OPC Foundation distributes a core components download which includes runtime callable wrappers (RCWs) for the OPC components. These RCW interfaces only work in the .NET 1.1 framework. I was able to use these interfaces to connect to the server, set up I/O items, perform reads and writes, and listen for data change events from the server. This is pretty exciting for me, because a lot of the software that our company writes is based on OPC server information. This gets our foot in the door to doing more development in the .NET framework, which I am a big fan of. Of course, the OPC Foundation has already developed a .NET API which will make all of this very easy to do, but you have to be a member of the foundation to get the API. Membership costs $1,500. We used to be members, but we let our membership lapse last year. Maybe we will join again, but if not, at least we have a solution for gaining access from within the framework.