
Category Archives: Development

Once again, I’m apparently working in uncharted territory for the Dojo framework. I am working on a rich application that uses a tree view to navigate certain data. When a user makes changes, the tree needs to update itself. In previous applications I’ve resorted to refreshing the entire tree, because the dijit.Tree is not very developer friendly; it is difficult to set up in the first place due to the lack of documentation. This time, however, I decided to dive into the code and figure out how to refresh only certain nodes in the tree. Here is what I found; it is pretty straightforward:

dojo.provide("my.ContentTreeNode");

dojo.require("dijit.Tree");

dojo.declare("my.ContentTreeNode", [dijit._TreeNode], {

	_setIconClassAttr: function(iconClass) {
		dojo.query("> .dijitTreeRow > .dijitTreeContent > .dijitIcon", this.domNode).attr("class", "dijitIcon dijitTreeIcon " + iconClass);
	},

	updateChildItems: function(items) {
		this.clearChildren();
		// set the child items of the node
		this.setChildItems(items);

		this.tree._expandNode(this, true);
	},

	clearChildren: function() {
		var childNodes = this.getChildren();

		dojo.forEach(childNodes, function(childNode) {
			// remove the node from the tree's item node map
			delete this.tree._itemNodesMap[this.tree.model.getIdentity(childNode.item)];
			// remove each node
			this.removeChild(childNode);
		}, this);
	}

});

As you can see, I extended dijit._TreeNode in order to add a few methods.
In order to use this tree node, you’ll also have to extend the dijit.Tree:

dojo.provide("MyContentTree");

dojo.require("dijit.Tree");
dojo.require("my.ContentTreeNode");

dojo.declare("my.ContentTree", [ dijit.Tree ], {
	_createTreeNode: function(args) {
		return new my.ContentTreeNode(args);
	}
});
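With both classes declared, usage might look something like the following. This is a sketch, not code from the post: `myModel`, `changedItem`, and `newItems` are assumptions standing in for whatever store and model you already have, and note that in Dojo 1.x `tree._itemNodesMap` maps an item’s identity to an array of nodes.

```javascript
// Build the tree from the subclass so every node is a my.ContentTreeNode.
var tree = new my.ContentTree({
	model: myModel // e.g. a dijit.tree.ForestStoreModel over your store
}, "treeContainer");

// Later, when the data behind an item changes, refresh just that node
// instead of rebuilding the whole tree.
var nodes = tree._itemNodesMap[myModel.getIdentity(changedItem)];
if (nodes && nodes.length) {
	nodes[0].updateChildItems(newItems); // newItems: the item's fresh children
}
```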

I just thought I’d share this since Dojo’s documentation is lacking useful information. It seems like no matter how many documentation websites the guys at Dojo create, they never fully document everything. Here is a perfect example: I couldn’t for the life of me find out how to programmatically create a dijit.PopupMenuItem. Through trial and error I figured this out:

var menuItem = new dijit.PopupMenuItem({
	label: "My Item",
	iconClass: "myIconClass",
	popup: new dijit.Menu()
});

The key is the popup property. If you don’t initialize the popup property with an instance of a new dijit.Menu, the whole thing doesn’t work. You can then use menuItem.popup.addChild(…) to add new menu items to the popup menu item.
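Putting it together, a sketch of building out the submenu (the labels and the onClick body here are made up for illustration):

```javascript
var menu = new dijit.Menu();

var menuItem = new dijit.PopupMenuItem({
	label: "My Item",
	iconClass: "myIconClass",
	popup: new dijit.Menu() // without this, the submenu never appears
});
menu.addChild(menuItem);

// Add entries to the submenu through the popup property.
menuItem.popup.addChild(new dijit.MenuItem({
	label: "Sub Item",
	onClick: function() { console.log("clicked"); }
}));
```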

I just wanted to quickly throw this out there because I had a really hard time finding any information on this issue.  The scenario I have is a base abstract class which contains a handful of properties.  There are three types which extend this class and add their own relationships.

The problem I ran into is that by default Fluent NHibernate’s auto mapper uses the type names as the discriminator to determine which type should be loaded.  What this means is that the column in the database must be a varchar.  I think it is pretty obnoxious to have the full type name of your class in every single row.  What I want is to use an enum to determine which type is used, and have it stored as an integer in the database.  You’d think this would be a pretty simple and well documented use case; it is not.

First off, you’ll need to create an auto mapping override for the parent class:

public class ParentMap : IAutoMappingOverride<Parent> {
    // Parent here stands in for your abstract base class
    public void Override(AutoMapping<Parent> mapping) {
        mapping.DiscriminateSubClassesOnColumn("Type", 0);
    }
}

It took some trial and error, but I figured out how to do what I wanted.  In your auto mapping configuration class, you’ll need to tell the auto mapper to ignore the types which extend the abstract base class, like so:

public override bool ShouldMap(System.Type type) {
	return type != typeof(ChildType1) && type != typeof(ChildType2) && type != typeof(ChildType3);
}

Next, you need to create your own SubclassMap which manually maps all of the properties specific to the child class. Just to be clear, you do not have to map any of the properties defined in the parent class; those are handled by the auto mapper (and if you need to modify their mappings you should create an auto mapping override).

public class ChildType1Map : SubclassMap<ChildType1> {
    public ChildType1Map() {
        this.DiscriminatorValue((int)MyEnum.ChildType1);
        this.References(x => x.AnotherThing);
        this.Map(x => x.Name);
    }
}

The important part here is the call to DiscriminatorValue. This is where you instruct NHibernate what the value of the discriminator column will be for this type. Do the same for your other entities that extend the parent class. In order to get Fluent to pick up on these mapping files you’ll need to register them in your AutoPersistenceModel:

model.Add(typeof(ChildType1Map));

That should be all it takes. Hopefully this saves someone the hours I spent researching this.

Introduction

I’ve been working diligently lately on a new internal web application.  I made the decision long ago that the app would be a full-blown AJAX application.  After deciding to go with DWR, Java, and Hibernate running on Tomcat, I quickly noticed that my situation was pretty rare.  It seems as though everyone out there also uses Spring in combination with DWR and Hibernate.  I briefly went through the Spring documentation and could not find a reason why I would need it for our application.

Along the way I’ve run into a few issues specifically with DWR and Hibernate and I’d like to share the experience for anyone else out there looking to do the same.

OpenSessionInViewFilter

I kept hearing about this magical filter from Spring that handles all of your Hibernate session management woes inside of an AJAX application.  I was having all kinds of issues with sessions being closed, lazy-loaded collections throwing exceptions, etc.  The solution was this servlet filter:

import java.io.IOException;

import javax.servlet.*;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hibernate.SessionFactory;
import org.hibernate.StaleObjectStateException;

public class OpenSessionInViewFilter implements Filter {

	private static Log log = LogFactory.getLog(OpenSessionInViewFilter.class);

	private SessionFactory sf;

	public void doFilter(ServletRequest request, ServletResponse response,
			FilterChain chain) throws IOException, ServletException {
		try {
			log.debug("Starting a database transaction");
			Factory.beginTransaction();

			// Call the next filter (continue request processing)
			chain.doFilter(request, response);

			// Commit and cleanup
			log.debug("Committing the database transaction");
			Factory.commitTransaction();
		} catch (StaleObjectStateException staleEx) {
			log.error("This interceptor does not implement optimistic concurrency control!");
			log.error("Your application will not work until you add compensation actions!");
			// Rollback, close everything, possibly compensate for any permanent
			// changes during the conversation, and finally restart the business
			// conversation. Maybe give the user of the application a chance to
			// merge some of his work with fresh data... what you do here depends
			// on your application's design.
			throw staleEx;
		} catch (Throwable ex) {
			// Rollback only
			log.error("Error in application", ex);

			try {
				if (sf.getCurrentSession().getTransaction().isActive()) {
					log.debug("Trying to rollback database transaction after exception");
					Factory.rollbackTransaction();
				}
			} catch (Throwable rbEx) {
				log.error("Could not rollback transaction after exception!", rbEx);
			}

			// Let others handle it... maybe another interceptor for exceptions?
			throw new ServletException(ex);
		}
	}

	public void init(FilterConfig filterConfig) throws ServletException {
		log.debug("Initializing filter...");
		log.debug("Obtaining SessionFactory from static factory singleton");
		sf = Factory.getFactory();
	}

	public void destroy() {
		// nothing to clean up
	}
}

It is very simple: it just opens a transaction for each incoming request and either commits or rolls it back when all is finished.  The fact that it is a filter seems to be key; I wrote a similar servlet that did not have the same effect.

DWR Converters

DWR comes with a number of converters that work really nicely.  The most interesting one happens to be the Hibernate3 converter, and it does a decent job.  There is one major issue with the converter, though, for my unique situation.  I have a number of data objects with properties that I do not want passed to the client: for instance, any password property (for obvious reasons), image / blob properties, etc.  DWR has a nice feature that you can use to exclude properties so they are not delivered to the client.  To use this feature you simply add the following to your object’s convert node in dwr.xml:

<param name="exclude" value="properties, to, exclude" />

The only issue with this feature is that it also excludes the property when transmitting data up to the server, so if you need to update one of these fields it is ignored on the way in.  Also, when you send the object to the server, DWR’s Hibernate3 converter starts with a fresh object, which means your excluded properties will be wiped out.  These are pretty serious problems that were not easy to solve. The solution was to write my own converter, which I named the H3SmartBeanConverter. It was adapted from a combination of the H3BeanConverter and the BasicObjectConverter, which are part of DWR.  You can download it below.

The first thing I had to change was the way the excludes are handled.  If the converter is performing an inbound conversion, all properties are converted regardless of the exclude rules:

// Access rules mean we might not want to do this one.
// Only check exclude rules when creating an outbound object; if write is
// required, allow everything through.
if (!isAllowedByIncludeExcludeRules(name) && !writeRequired) {
	continue;
}

I also needed lazy-loaded properties to be initialized and passed down to the client, as long as the property is not excluded.  To accomplish this I modified the section of code dealing with Hibernate’s lazy properties:

if (readRequired) {
	// This might be a lazy collection, so we need to double check
	Object retval = method.invoke(example, new Object[] {});
	if (!Hibernate.isInitialized(retval)) {
		Hibernate.initialize(retval);
	}
}

The final piece of the converter fixes the problem of excluded properties being wiped out.  In the convertInbound method of the converter, I check to see if the object being converted extends my data object class.  If it does, I load the object from the database prior to loading properties:

// If the bean is a data object, first load it from the database. This will
// prevent properties that are not passed to the client from being wiped out.
if (DataObject.class.isAssignableFrom(beanType)) {
	// get the id field
	String rawID = (String) tokens.get("id");
	String[] split = ParseUtil.splitInbound(rawID);
	String splitValue = split[LocalUtil.INBOUND_INDEX_VALUE];
	String splitType = split[LocalUtil.INBOUND_INDEX_TYPE];

	InboundVariable nested = new InboundVariable(iv.getLookup(), null, splitType, splitValue);
	TypeHintContext incc = createTypeHintContext(inctx, (Property) properties.get("id"));
	Integer id = (Integer) converterManager.convertInbound(
			((Property) properties.get("id")).getPropertyType(), nested, inctx, incc);

	if (id > 0) {
		bean = Factory.get(beanType, id);
	}
}

The Factory class is a basic wrapper for the Hibernate session object; Factory.get calls Session.get under the covers.  Now, pre-loading the object from the database created yet another issue.  I was getting the error: "A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity."  The problem was that once the object was loaded from the database, the collection properties were being overwritten with regular collection objects, and Hibernate uses specialized collection classes to handle persistence.  So I had to add a bit of code to iterate through the collections instead of overwriting them:

// handle collections
if (Collection.class.isAssignableFrom(propType)) {
	Collection collection = (Collection) property.getValue(bean);
	collection.clear();

	for (Object obj : (Collection) output) {
		collection.add(obj);
	}
} else if (Map.class.isAssignableFrom(propType)) {
	Map map = (Map) property.getValue(bean);
	map.clear();

	for (Object obj : ((Map) output).entrySet()) {
		Map.Entry mapEntry = (Map.Entry) obj;
		map.put(mapEntry.getKey(), mapEntry.getValue());
	}
} else {
	property.setValue(bean, output);
}

Depending on which collections you are using you may need to add more code to handle them.  Finally, I had a working solution to my two biggest problems with DWR and Hibernate.  So far this solution has worked out really well, I have not run into any other issues.  If anyone has any suggestions as to how to do this in a simpler fashion, I’m all ears.

Download Java Classes:

H3SmartBeanConverter.java

OpenSessionInViewFilter.java

Well, I spent quite a bit of time trying to figure out what was going on with IE this time…I use Dojo 1.1 for the application I’m building at work, and with their theming system for Dijit, they have a tundra.commented.css file, which includes each individual widget’s corresponding CSS file via an @import statement.

Well, I thought, “Hey, this is pretty cool and easy to manage, so I’ll do it this way for my widgets as well for development purposes (we compile our CSS into separate, compressed files afterward).”  Later on that day, QA came to me and said “Hey, this thing here isn’t showing up!” and “I can’t see this dialog anymore, do you know where it went?!”  Sure enough, I went over to look at my QA colleagues’ screen, and things looked all mangled and wrong… but only in IE!  What the heck could be going on?!

Come to find out, after using the awesome IE Development Toolbar (yes, I’m being sarcastic), the styles weren’t even loaded into the browser!! Well, WTF?! So I cleared my cache, deleted all cookies, etc., just to make damn sure nothing funky there was happening…(refresh)…Same thing, again.

After a while of fussing with style sheets, checking other browsers, getting the generic -218760-whatever error code, etc., I finally decided to include them all via <link> tags.  Now a new error, only a run-time error this time!  “Invalid Argument.”  And then… BAM!  A second error: “There is not enough free memory to perform this operation.”  Again… WTF?!

So, at this point, I was desperate and decided to try including them via JavaScript.  What’s that syntax again?  document.createStyleSheet?  Hmmm… better go look it up… “Whoa, what in the name is this?!  You can only create 31 stylesheets with document.createStyleSheet, yet you can add as many as you want if you do a document.createElement(“STYLE”) and append it to the DOM?!  Hmmmmm…”

So now, I was curious, and imported only 31 stylesheets for widgets that I knew would show right away… Sure enough, everything I specified showed up perfectly!  Then, to take it a step further, I decided to split the master CSS file with all of the @imports into 2 files, and then import those… Amazingly, it worked!

So there you have it, the answer to fixing the extremely ridiculous, and hardly documented, IE CSS @import issue.

Lessons learned

  • IE seems to support only 31 @import statements per CSS file, 31 <link></link> tags on a given page, or 31 stylesheets created via document.createStyleSheet
  • You may notice styles getting ‘lost’ or ‘messed up’, or you may get the errors “Invalid argument.” or “There is not enough free memory to perform this operation.” when this limit is reached.
  • To get around this limitation…
    • Split your @imports into 2 or more files, and then load those files.
    • Use document.createElement(“style”) statements and append those elements to the HEAD element
  • IE most likely uses the same internal methods for @imports and <link></link> tags that document.createStyleSheet uses, given this shared limit of 31.
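The splitting workaround lends itself to a small helper. This is a sketch with hypothetical names (chunkStylesheets and buildMasterSheets are not from any library): given a long list of stylesheet URLs, it produces the bodies of two or more “master” stylesheets, each holding at most 31 @import rules.

```javascript
// Split a list of stylesheet URLs into groups that stay under IE's limit.
function chunkStylesheets(urls, limit) {
	var groups = [];
	for (var i = 0; i < urls.length; i += limit) {
		groups.push(urls.slice(i, i + limit));
	}
	return groups;
}

// Build the text of one master stylesheet per group, each containing
// only @import rules (at most 31 per sheet).
function buildMasterSheets(urls) {
	return chunkStylesheets(urls, 31).map(function (group) {
		return group.map(function (url) {
			return '@import url("' + url + '");';
		}).join("\n");
	});
}
```

Serve each returned sheet as its own .css file and pull them in with separate <link> tags (staying under 31 of those as well).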

Resources

createStyleSheet Method – http://msdn.microsoft.com/en-us/library/ms531194(VS.85).aspx

Recently I was tasked with helping to integrate Sitecore into my company’s website. Currently we are using Microsoft CMS 2002, which for the most part works well. It does exactly what we need it to at this point. The problem is that it is so old, I believe the support is being dropped. Our lead content developer went through a long process of trying to find the right product for us. She finally came to the conclusion that Sitecore would work best for us.

Our website is written in 100% ASP.NET 2.0 with C#. Sitecore is an all-.NET CMS, so it makes sense. We were able to get our hands on a trial version so that we could begin work on a POC. After going through the Sitecore documentation we realized that the way they want your website set up really doesn’t play nicely with our project: they want their site to be the root, and all of your site’s files to be a subfolder. This is not very practical for us because we run multiple sites on one code platform. Also, in development we generally work on multiple branches. The Sitecore files are around 300MB, and with multiple sites and branches, copying that amount of data around wouldn’t be practical.

So, a coworker and I tried many different scenarios and finally came up with a pretty good solution. What we originally had in mind was to just reference the Sitecore assemblies and set up all the configuration for Sitecore in our web.config. After we did that we ran into a couple of snags. The first is that even just to run their HttpModule, you need to give their assemblies access to the sitecore install/sitecore directory. So we created a virtual directory that pointed to sitecore install/sitecore. This virtual directory must be a plain old virtual directory, not an application.

One issue is that you need all of the assemblies from Sitecore’s bin directory, not just the Sitecore assemblies. There are also a couple of other required directories. Below you can see what is required; the sitecore folder is a virtual directory, and all the other directories were copied from the Sitecore install.

/YourWebSite/
	sitecore (vd)
	App_Config
	data
	sitecore modules
	temp

Another thing worth mentioning is that the Sitecore installer doesn’t set up the configuration for your database even if you specify your database information. You have to go into the App_Config/SqlServer/Connections.config file and manually specify your connection information. In your web.config you must also make sure the connections node’s serverMode attribute is set to blank. This node can be found under the sitecore node.

In the web.config there are also a bunch of folder paths that will need to be updated. For instance, <sc.variable name="dataFolder" value="../data" />. Make sure these variables map to the correct paths.
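For reference, the variables block in web.config ends up looking something like this. A sketch only: the exact set of sc.variable entries depends on your Sitecore version, and the values shown are assumptions based on the folder layout above.

```xml
<sitecore database="SqlServer">
  <!-- dataFolder points outside the web root, per the layout above (assumption) -->
  <sc.variable name="dataFolder" value="../data" />
  <sc.variable name="mediaFolder" value="/upload" />
  <sc.variable name="tempFolder" value="/temp" />
</sitecore>
```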

After all of that, the site runs pretty well with Sitecore, and we didn’t have to change the structure of our project.

After hours of research and finding that no one else had experienced this issue, at least my form of this problem, I finally found a post somewhere. I had created a page with a single button on it, and I was able to click it and the page would post back successfully. All was good. I had a number of textboxes on the page that were required, and some that needed additional validation as well. So I threw on my RequiredFieldValidators and RegularExpressionValidators, and the validators worked correctly: messages appearing when data was not present and disappearing when data was valid. But when I clicked the submit button, NOTHING happened. So I removed all of the validators and the page worked again; I added one validator back, and posting was not possible again.

The issue is that I had placed a script reference with a self-closing tag, something like this: <script type="text/javascript" src="/js/somefile.js"/>. For whatever reason that only God can explain, this upsets the Page_ClientValidate function. Conclusion: throw out XHTML when including script files and make sure you have a closing tag: <script type="text/javascript" src="/js/somefile.js"></script>.

Thank you again Microsoft

You would think a framework like ASP.NET, as big and popular as it is, would get the little things right.  It’s so simple: all I want to do is put an anchor around a button with an onclick.  Unfortunately, the anchor tag needs to be a server control because the onclick is generated dynamically on the server.  For whatever reason, .NET adds the path to the user control to the anchor tag’s href (if you don’t enter an absolute path yourself).  So, for instance, all I wanted was #search for an anchor link, and .NET outputs it like so: http://serverpath/usercontrolpath/#search.  So ridiculous. I googled around and didn’t find much, and ended up having to put the absolute path to the page and then the anchor.  It’s so simple, yet they (again) didn’t get it right.

It is very important that the development environment a programmer uses in his daily duties be as pain-free as possible. The more irritating an IDE is, the more you’ll hate all the work you do in it. I have found that Visual Studio is a big pain to work with. Since I started using Eclipse on Linux, I have seen that not all IDEs are as painful as Visual Studio. I am going to discuss in detail my comparison of Eclipse and Visual Studio.  Below is a detailed list of the categories I think are important in the overall experience of an IDE.

I. Install / Setup / Plugins

Installation may not seem like an important thing to discuss about an IDE. You are probably thinking: “Meh, I only have to do it once.” It is still a part of the overall experience, though, and you’ll see momentarily why I decided to discuss it.

First, I will discuss the Visual Studio install. As far as Microsoft installers go, this one is very straightforward and simple; for all the complaining people do about Microsoft installers, this is actually a very good one. My only gripe with it is the fact that it takes so long. Even on a high-end machine it will take at least a half hour, maybe an hour. That’s only the initial install. After you’re done installing that, you then need to pull down the 400MB SP1. SP1 takes more than an hour to install, even longer if you have Team Suite. Not to mention that the service pack doesn’t seem to do much; I didn’t notice any improvement over the vanilla version.

Next we’ll talk about the Eclipse installer. Oh wait, there isn’t one! If you are an Ubuntu / Debian user (or you have a package manager like yum), you can install through your package manager. The problem with that is the repositories contain an older version, and most plugins require the newest version. So I’d recommend “installing” manually. I say “installing” because all you have to do is download a tar file and extract it. Oh yeah, the whole thing is about a 60MB download; I think Visual Studio starts at around 2GB. Once that is done, you can find all the plugins you want using the update manager (Help -> Software Updates -> Find and Install). Even most third-party plugins have their own update site. All you have to do is grab the URL, paste it in, and bam! You can install the plugin. It is even very good at managing dependencies.

So, to summarize, the install process for Eclipse is obviously a lot quicker. Also, because everything you need for the IDE exists in one folder, you can easily back up your IDE by archiving it and throwing it on a pen drive or something. That way, if you ever manage to screw it up, all you need to do is delete your messed up folder, and paste the old one back in. Just remember to never store your source files in the Eclipse folder.

II. Performance

Performance is vastly important when it comes to an IDE. Because we developers spend so much time in the IDE, performance issues can really get under our skin. Visual Studio is absolutely dog slow. Most of the projects I work on are pretty big (the solution I’m on now has about 17 projects in it). However, they’re not so big that you would expect anything to slow down. I worked on one solution that had 75 projects in it, and it was absolutely unusable. Now, that solution had no business having 75 projects, I will admit that.

We use Team Foundation at work, which is okay. Just like any other Microsoft product, it leaves something to be desired, but it works. It’s actually really quick. Getting the latest version is very fast (although sometimes doesn’t actually get the latest version of the files you select, go figure). However, when everything is done downloading from the server, projects (and sometimes the solution) have to be reloaded. When many projects are updated, this can take well over a minute.

Here’s my biggest issue with Visual Studio, right here. Opening a markup file such as an .aspx or .ascx file is absurdly slow. It can hang anywhere from 20 seconds to a minute. Absolutely unacceptable, it really is. When I’m dealing with a bunch of markup all day long, the last thing I want to do is sit around and wait while Visual Studio does God knows what “behind” the scenes. Oh yeah, again, Microsoft: whatever happened to multi-threading? Not only that, it doesn’t even show some sort of message to let me know it’s doing something; it just hangs. Classic Microsoft.

Now on to Eclipse. In general, Eclipse is very, very quick. Startup of the IDE is similar to Visual Studio, I think. However, once in the IDE there’s no comparison: Eclipse is a lot faster. Loading an HTML file takes zero time. The WTP plugin for Eclipse even runs through all of your files and validates the HTML / JavaScript / CSS. Oh yeah, and it uses something called a background thread, fancy that. Not only that, it shows a message at the bottom with a progress bar letting you know it’s validating. This is something Eclipse does every time it’s doing anything that may interrupt you: it will tell you what it’s doing and usually give you a pretty accurate progress bar.

I admit I have seen Eclipse freeze up at times, but usually it’s because I did something stupid. There are very rarely hangs of any kind, unlike Visual Studio, which hangs very predictably. So Eclipse wins this round as well.

III. Customization

Customization is very important because, I don’t know about you guys, but I’m very picky about the way my IDE is organized. This is one category where I can’t complain about Visual Studio. Visual Studio is pretty customizable; Microsoft made it easy to put your tool windows wherever you want and dock them how you want. I like my left monitor to have only my editor window, and the right to have all of the tool windows such as Solution Explorer, the Source Control browser, etc. I like to have different layouts (or perspectives, as they’re called in Eclipse), and Visual Studio kind of supports this, but it’s a hidden feature: there are a couple of macros that allow you to save your layouts, so you have to set up keyboard shortcuts to those macros to make this work.

The one annoying thing is the “springy” tool windows, especially if you hide a window like the Errors window. Whenever there’s a parser error or warning, the stupid thing springs out. This even happens while you’re TYPING in an html / aspx / ascx file. How can you validate if I’m still typing, Microsoft?? Also, Visual Studio makes it easy to color your code the way you want. It even throws in a nice import / export feature for all of your settings. And as a bonus, a bunch of people have created very good color schemes for Visual Studio.

Eclipse is very similar in the way it handles customizing the UI. You can easily dock and move the tool windows anywhere you like. It also has built in and readily apparent support for perspectives. This means you can create a layout you like for one language, and save it, and then switch to another project and use another layout very easily. It comes with some basic built in perspectives that are good. Also, when you install new plugins a lot of them come with their own perspectives.

Eclipse also has support for importing / exporting color schemes, which is nice, but nobody has created any color schemes for it (that I can find).

IV. Source Control / Team

Source control: very, very important. Every developer knows this. And just as important as keeping your source, uh, safe (heh) is the manner in which the source control plugin for your IDE works.

Like I said before, Team Foundation is pretty good. It’s definitely a lot better than Visual SourceSafe. I guess I shouldn’t say pretty good; I should say better than SourceSafe. But I think a file share is almost better than SourceSafe, so… The biggest problem with TFS is the way it handles workspaces and getting latest. I don’t know why this happens, but sometimes you get the latest version and it skips files or something. Someone will fix something, I’ll get the latest source on the entire solution, and it doesn’t actually get the new file. I have to do “Get Specific Version” and force a get on all files. I don’t understand why this happens; it’s so simple: download the mfing file. Also, in TFS there is no easy way to find a changeset. You have to go to “Get Specific Version,” pick by changeset, then click Find. Once you do that there’s a great search tool, but it’s just so annoying to get to.

I almost forgot about the worst part of TFS, conflict management. If you can call it that. A coworker literally showed me a conflict TFS complained about today where he had added a newline before an ending curly brace and TFS didn’t know what to do. Simply stunning. Conflicts are seemingly random, some things it knows how to merge perfectly, others it just runs away screaming. “WHAT DO I DO??? YOU ADDED A SPACE ON A LINE SOMEONE ELSE CHANGED!!!!!!” Seriously, I don’t know how they managed to make such a terrible conflict management system.

The rest of Team Foundation is okay. It’s pretty straightforward to add and manage tasks and bugs. There are a few quirks (the worst being the conflict management above). For the most part it works really well. It deals with links very oddly, though: when you click a link it launches IE inside of Visual Studio, regardless of your default browser setting. When you try to select a link to copy it, it opens in IE inside of Visual Studio. Awesome.

Again, Eclipse wins. I use the Subclipse plugin, a plugin for Subversion. Subversion is very good: it’s fast, it’s simple, it’s free. It can do everything Team Foundation source control can do, and it does it better. It’s really easy to get a project and to add a project to the repository. You can deal with multiple repositories at a time (unlike TFS). When you do an update (get latest in M$ speak) it actually gets the latest version. There’s an on-the-fly conflict view that shows you where the differences are between your file versions and the server’s (TFS has nothing like this, and it’s very handy). I have never seen a conflict in Subversion while using Subclipse. I’m sure they happen, but the project I’m working on only has two active developers at the moment, so I can’t really tell you how good or bad Subversion / Subclipse is at dealing with conflicts, but the diff tool is very good!

V. Integration

Well, Visual Studio’s section on this is pretty simple: if it isn’t Microsoft, it’s not in Visual Studio. Obviously, everything you do in Visual Studio involves .NET, or SQL Server, or MS C++.

This is where Eclipse really shines. I have used plugins for C++, the web (HTML, CSS, JavaScript, Dojo and more), Python, and PHP. They are all very good, easy to install, and easy to use. Check it out: you can even get a C# plugin for Eclipse. Add another win for Eclipse.

VI. Conclusion

Well, there you have it: Eclipse is the winner. In pretty much every area I can think of, Eclipse beats out Visual Studio. If you’re doing .NET you’re pretty much stuck with Visual Studio, although if you can settle for Mono, MonoDevelop has come a long way. I am actually trying to get away from .NET and do some open source work since I have started to use Linux so much. It’s nice to build applications that run on more than one OS. Also, I do a lot of web work, and I have discovered that ASP.NET definitely isn’t the best tool for the job. Yes, C# is probably just about the easiest language there is (Visual Basic doesn’t count, it’s not a REAL language), and the .NET Framework is very well organized, but even still there are a million ways to accomplish your programming goals. As far as development environments go, Visual Studio is definitely not the best.

I don’t know exactly how I came across this, but I did. There is a Subversion plug-in for Visual Studio that Microsoft recommends, seen here: http://msdn.microsoft.com/msdnmag/issues/08/LA/Toolbox.

My first thought was: well, that’s good. A plug-in for Visual Studio for open-source, free source control that Microsoft is advertising can’t be all bad. Boy, was I wrong. Subversion may be free, but the plug-in isn’t. The plug-in, called VisualSVN, costs $49… per license. Now, I’m not saying it’s all Microsoft’s fault; they didn’t write the plug-in. But come on, how can someone write a plug-in for something that is free and charge for it? It figures that Microsoft would back this instead of something like AnkhSVN, which is also open-source and, are you ready for this, free.

Figures that this would come from something on MSDN, the most useless resource for everything worthless.