Tuesday, April 20, 2010

Getting New Code Into SVN From Visual Studio

I'm currently using SVN for source control, via VisualSVN integrated with Visual Studio, which I think is awesome. I used to work at a place that used command-line CVS to control source - yikes. Nothing like crazy long command-line arguments to make source control hard.

But anyway, to the task of adding new code to the repository. I'm sure there are many ways, but here's mine. FYI, when I was setting this up, I was told to install TortoiseSVN and the VisualSVN server/client. I'm not sure all of that is necessary, but it's what I did and it works.

1) On the PC running VisualSVN Server (it could be your local machine or a remote one - I like remote, in case something happens to your local machine), create a new folder in the server's repository directory.
2) Right-click on that folder, go to TortoiseSVN, and select "Create repository here." It'll go ahead and do some setup (adding directories and files).
3) Back in Visual Studio, right-click on the project and click "Add to Subversion." Choose to add it to an existing repository, and select the one you just created.
4) Then just keep clicking through the dialog boxes until you're done. Don't forget to commit your code after adding, too.
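For reference, here's roughly the command-line equivalent of steps 1-3 (the paths, server name, and repository name below are made up - substitute your own):

    REM Create the repository (run on the server):
    svnadmin create C:\Repositories\MyProject

    REM Import the code, then check out a working copy (run on your dev PC):
    svn import C:\Code\MyProject http://myserver/svn/MyProject -m "Initial import"
    svn checkout http://myserver/svn/MyProject C:\Code\MyProjectWC

The GUI tools are doing more or less this under the hood.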

This all assumes you have the SVN repository server set up, and VisualSVN already set up and integrated with Visual Studio.

Visual Studio Installer Error

I've created a Windows Service (in C#) using Visual Studio, and also created an installer (.msi file), but I was getting several errors when trying to run the installer. The messages didn't say outright what was wrong, but they pointed in the right direction. Here they are, so that hopefully some Googler will find them someday and solve their problem. My solution follows.

One error message: "...does not exist. If this parameter is used as an installer option, the format must be /key=[value]"

Another error message: "Error 1001. Exception occurred while initializing the installation: System.IO.FileNotFoundException: Could not load file or assembly 'file:///C:\windows\system32\Files\HP\MirthDirWatchSetup\' or one of its dependencies. The system cannot find the file specified."

Inside the custom actions for the installer, I had CustomActionData set to the following: /Param1="[EDITA1]" /Param2="[EDITA2]" /targetdir="[TARGETDIR]\".

Note 1: If any of the parameters (including TARGETDIR) has spaces, you need those quotes.

Note 2: If any of these parameters are PATHS - which TARGETDIR is - you need that backslash after the closing bracket.

My problem was that Param1 was also a path, and I didn't know that ALL paths need that backslash after the bracket. I had thought it was just TARGETDIR. So the lesson learned here is that when you're creating parameters, if any of them is a path, it needs the trailing backslash. This may apply to any parameter that contains a backslash, too, but I'm not going to test that now - I just spent all day figuring this out. I've put enough of the keywords in here; here's to hoping the next person with this problem finds this post!
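To make it concrete (assuming, as in my case, that EDITA1 holds a path and EDITA2 doesn't):

    Broken - Param1 is a path but has no trailing backslash:
        /Param1="[EDITA1]" /Param2="[EDITA2]" /targetdir="[TARGETDIR]\"

    Fixed - every path-valued parameter gets the backslash:
        /Param1="[EDITA1]\" /Param2="[EDITA2]" /targetdir="[TARGETDIR]\"

The usual explanation is that a path's own trailing backslash otherwise escapes the closing quote when Windows Installer builds the parameter string.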

Wednesday, April 14, 2010

Development Speed vs. Maintainability

I've researched programming methodologies, and most of them seem to fall into two major groups: Agile and Waterfall. Waterfall is stereotypically slow, rigid, and exacting, whereas Agile is faster and more flexible. Sometimes you have to program for speed - if, for business reasons (beating the competition to market, high workload, poor planning), you have a short deadline.

So how? How is your code, or your program, different when you have to program for speed? Or is there even such a thing as programming quickly? I've definitely learned by now that I can't "code faster" - even the novice programmer knows we're not shoveling dirt here; we're solving problems.

Focus, for me, is the only thing that can speed up a coding effort. Notice I didn't say "development effort" - development encompasses much more than writing code. Given a coding task, intense focus and concentration put me in the zone, where algorithms, code, and bug fixes roll out smoothly.

But to speed up a development effort? Division of labor comes to mind first. A technical specification has to be drawn up, one that divides the work into procedures and functions with inputs, outputs, pre-conditions, and post-conditions (yep, all that stuff you learned in Programming 101!). Then you split it up, let the coders code, and in theory you'll just connect the parts later. Provided the tech spec is clear enough and well thought out, and each coder tests their individual unit well enough, you'll quickly have a nice piece of software.
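To picture what one unit of such a spec might look like, here's a made-up example in JavaScript (the function and its spec are invented, just for illustration):

    // From a hypothetical tech spec:
    //   Input:          dateOfBirth (Date), asOf (Date)
    //   Output:         the patient's age in whole years (integer)
    //   Pre-condition:  dateOfBirth <= asOf
    //   Post-condition: return value >= 0
    function ageInYears(dateOfBirth, asOf) {
        var age = asOf.getFullYear() - dateOfBirth.getFullYear();
        // Knock off a year if the birthday hasn't happened yet this year.
        var hadBirthday = asOf.getMonth() > dateOfBirth.getMonth() ||
            (asOf.getMonth() === dateOfBirth.getMonth() &&
             asOf.getDate() >= dateOfBirth.getDate());
        return hadBirthday ? age : age - 1;
    }

Each coder gets a pile of these, and the connecting happens later.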

Or do you just start coding? Is putting code to the screen really the fastest way to develop software? It can be fun - who doesn't like to just start pumping out code, design be damned? But for those who have taken this route, the inefficiencies show up later as the code base gets larger and/or new features get added. At that point the design flaws become glaring - which is to be expected, because all you did was start coding without giving much thought to design.

Yes, you can still just plug along, hacking, patching, and adding until it does what you want. But good Lord, I feel for the poor soul who has to maintain and upgrade that software. I had the privilege/burden of working on a large piece of software that had been patched, upgraded, and customized by hundreds of developers over 15 years, with little management and consistency. Making changes to that software was not for the faint of heart. Even a seemingly innocuous change ended up causing a bizarre error in some other module of the software (I realize by definition modules shouldn't depend on one another, but they did!).

When I am afforded the time to actually design a piece of software, the end result is a logically organized, easily maintainable and upgradeable product. Naming conventions are consistent, and code, functions, and files are grouped properly and logically. Usually, though, my projects fall somewhere in between: you can see the roots of a logical design, alongside the hacking and patching that became necessary for one reason or another.

My question to the development world: what do you do when you need to code, or develop, quickly? Do you just code haphazardly, ending up with more bugs or uglier code? Or can you develop quickly and still follow a good design, with good standards? If I need to pump out functions in a hurry, I don't look to see if everything is aligned properly and the names are consistent and make sense - I just put code to screen.

I can already hear the arguments against this approach, and I can't say I blame the people making them. The alternative is to tell my customer/client/manager that whatever they want can't be done. But what I'd really mean is "what you want can't be done the way I prefer to do it - but it can be done."

Tuesday, April 13, 2010

Using Globals - I Didn't Think It Could Happen To Me

Well it can. And it did. I used globals. And it blew up in my face.

I'm writing a web application with a lot of JavaScript. JavaScript is a bit of a strange animal because "library" files are included individually, and the order in which you include them affects which variables a given file can see - a file can only use globals defined by files included before it, at least for code that runs at load time (by scope, I of course mean where the variables can and can't be seen and used). Please correct me if I'm wrong.

So I needed a variable that would be initialized when the web application loads, and I needed to use this variable in many, many areas throughout the code. Making it global was the easy, easy solution. The only alternative I saw? Passing this variable in and out of many functions, which I foresaw creating spaghetti-like scope issues and extra time debugging. So, globals it was.
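In sketch form, the situation looked something like this (names invented - the real code was more involved):

    // Global: initialized once when the app loads, used all over the code.
    var currentRecordId = null;

    // Loads a record into the form - and, as a side effect, repoints
    // the global at whatever was just loaded.
    function loadRecord(id) {
        currentRecordId = id;
        // ...fetch the record and populate the form...
    }

    // Saves the form's data under currentRecordId.
    function saveForm() {
        // ...gather the form data and save it to the database...
    }

Call loadRecord() somewhere unexpected, and every saveForm() after it writes to the wrong record.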

And maybe in a small application - this one is moderately small, but growing all the time - you can get away with one or more globals. Or if you're the only developer and you've designed all the code, you know how, when, and where (you hope) the variables get used, so you can throw in a global or two.

But for this particular task, I was farming out some of the development to two other developers with whom, due to their location, I have limited communication. And here's where it happens: one of the other developers used a function I wrote in an unexpected way. A smart way - good for him for not rewriting a bunch of code to achieve something the existing code already did, however "hacky" it was (and I think we must accept some level of hackiness in applications; you can't redesign every time there's a quirk). AND the function that was used unexpectedly was also a function that modified the value of the global.

BUSTED.

I began seeing some strange errors and realized that I was overwriting some of the data I was saving to the database. I finally traced it back to this problem, and realized I had just committed one of the major programming sins. Well, boo me, but globals still seemed like a better idea than passing the variable around to tons of functions. What do you think? (No, seriously, what do you think? I write these posts because I'm interested in takes from other developers. I might learn something!!)
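For what it's worth, one compromise I can think of - short of threading the variable through every function - is to wrap the global in a single object so every change funnels through one findable place. A minimal sketch, with invented names:

    // Still effectively global, but all writes go through setRecordId(),
    // so one breakpoint or log statement there catches every change.
    var AppState = (function () {
        var currentRecordId = null;
        return {
            getRecordId: function () { return currentRecordId; },
            setRecordId: function (id) { currentRecordId = id; }
        };
    }());

    // Usage:
    //   AppState.setRecordId(42);
    //   var id = AppState.getRecordId();

It wouldn't have prevented my bug, but it might have made tracing it a ten-minute job instead of an all-day one.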

Monday, April 12, 2010

Stereotypes, Part 3 of 3 (Bridging the Gap)

I get the impression that most non-IT people or non-developers (developers/programmers are a subset of the IT industry, for those of you not familiar; here's another stereotype - if you're an IT person, you know it all: networking, security, development) think of IT people as being from a foreign country (metaphorically - I know that is literally sometimes the case), or even a foreign planet. I've worked on projects where the customer has repeatedly reminded me, "I know we speak different languages, but...". I do realize, of course, this customer was trying to bridge the gap. Which is great, and exactly what is needed to integrate IT into business operations.

But the truth is we do speak the same language. Not all IT people lack communication skills, wear the same clothes every day, or fail to wear deodorant (but good Lord - I've worked with enough people who do these things. For the love of God, WEAR DEODORANT AND CHANGE YOUR CLOTHING!!). It's up to us, the IT people, to take the initiative in bridging that gap. So how do we bridge it?

Mostly social skills and communication. By nature, programmers and developers often work in solitary environments - in fact, a preference for solitary work is often how one ends up a software developer in the first place. Let's face it, the math whizzes in elementary school don't often end up the most popular kids in class (except when it's time to take the test or hand in the homework). It makes sense that if you spend most of your time in a solitary environment, your social and communication skills will lag.

That is why those of us deficient in these skills must take the initiative to improve them - to bridge the gap and help integrate IT further into the business landscape. When we can communicate clearly (that's talking and listening) and stay on the same page with our non-IT co-workers and supervisors, we begin relating to them. When we relate to them, they get to know us - and they stop thinking we're aliens with strange languages and strange customs. At some point, after we take these steps, IT ceases to be an island (or peninsula) of strange people who may or may not be difficult to work with.

Two other items that will help bridge the gap: leave your ego at the door and shower approximately once a day - if hygiene or attitude is an issue, then address these as your first steps. You won't have to worry much about communication if you smell bad or think too highly of yourself. Perhaps this last paragraph is the most important one of the whole post.

Stereotypes, Part 2 of 3

Finally, there is the last stereotype (heck, I'm sure there are more - these are just the ones I had): the smart, good-looking, well-dressed IT man or woman with rectangular glasses, a good temperament, and well-developed social skills. Any one of these traits may or may not be true of a given IT person, but the complete package portrayed so often on TV (especially in commercials) is rare.

Usually, in shows like Law & Order: SVU (that guy who looks like B.D. Wong, whose name I haven't learned yet), these techie people know all, and can give you any answer you need about anything, quickly and fluently. If not an immediate answer, they'll pull up a computer program that has exactly the information needed, exactly when it's needed - with a really neat user interface. This, friends, is TV at its finest.

My experience tells me that most IT concepts and project updates, when they need to be communicated, involve much more than a few pointed or witty sentences. Usually it means some back-and-forth, some re-explanations, some errors, and occasionally some drawings on a whiteboard. Even for a software developer who communicates fluently, managing multiple projects and answering on-the-spot questions about the details of one particular project will involve some stammering and follow-up questions before the information gets across.

I have also seen (and felt) the pressure on IT people and software developers to emulate this stereotype. They feel (and I have felt, though I'm trying not to) pressure to provide short, concise answers to managers and stakeholders, "TV style." But life and communication don't work that way. Communication is an art, not a hard science (haha - which is maybe why some programmers struggle with it). What makes it worse are managers who expect the short, concise, exacting answer - that makes communicating information harder. I've seen developers leave out information because it didn't fit into the nice, concise answer management expects. This is also due, in part, to management not wanting to hear the added information. Truthfully, some non-technical IT managers only want to hear "yes, everything is good." Even if you mention issues with projects, the conversation is sometimes steered until you finally have to say "yes, everything is good" and smile.

But I digress. My point is, the snappy, witty IT guy with all the right answers at the ready does not exist. Situations are usually more complicated than a few short sentences can describe. If, as a developer, you're familiar with this pressure or feel it yourself - relax. Simply focus on what information is important, then communicate it in a way the other person can understand. Eventually you'll get good at it, and you'll be closer than ever to communicating as clearly and concisely as those people on TV whose lines are scripted and rehearsed.

Stereotypes, Part 1 of 3

For better or worse, stereotypes exist about programmers, developers, and/or IT people in general. I've had many myself. In fact, for many foolish years, I believed that in order to be a high-quality IT person, I had to somehow morph into those stereotypes. Ahh, the funny things we believe sometimes.

I spent lots of time cramming languages and projects into my head and onto my computer in order to obtain a large skill set. Anytime I read about a technology that I wasn't familiar with, I immediately felt behind the curve and somehow inferior as a developer. Finally, thankfully, I learned to accept what I don't know and to build my knowledge gradually.

I fell for one of the common stereotypes in the developer's world, one known as "the Guru." The Guru is an IT know-it-all: mention any technology or programming language, and the Guru knows what it is, the pros, the cons, and probably has experience with it. And to top it off, the Guru was superior and the non-Gurus were inferior - basically a pecking order in the IT world, with the Guru at the top. Strange? Yes. With a little more maturity, I realize this stereotype is just fantasy - just another manifestation of my own feelings of inferiority as a professional. The only things keeping this stereotype alive are, in fact, feelings of inferiority: the non-Gurus who feel inferior, and the Gurus who feel inferior and act superior to cover it. It's a self-feeding system.

Now, there are super-talented people who have a vast amount of experience and really can tell you at least something about many different technologies. Sometimes it feels like those people are the rule rather than the exception, and that can get discouraging. But having had the chance to work with people like this, I can say it's a wonderful opportunity to learn a lot in a short amount of time. I'd recommend developing a good relationship with such a person, if possible.

I'm pretty sure a small book could be written on the subject of these stereotypes. The programmer as a loner? Yes, it's out there, and I'm part loner myself. I find it very satisfying to get one or several days in a row of "pure programming," as I call it - where I design and program the entire day, with few interruptions and no meetings. I wouldn't want it 8/5/52 (8 hours a day, 5 days a week, 52 weeks a year) - but in moderation, I love it.

But the programmer as an alien? Haha - in some cases, it may seem that way. There is a group of highly intelligent people who lack social skills, and for them, computer work and programming are ideal: they get to exercise their creativity and intelligence while minimizing social interaction. This kind of person can be very happy in that situation. It's the poor communication and social skills that lead non-IT people to believe this person is "weird," or an alien. But the truth is, not really - it's just a matter of social skills and experience. I can promise you these "strange" IT people are more or less like everyone else. For those who are quirky and able to express it - I say good for you. The majority of people hide their quirks - which, in a lot of cases, are the great things that make each individual unique.

Friday, April 9, 2010

Requirements Gathering and Assumptions

Those of us with experience gathering software requirements from a customer or client, then translating them into a design and a finished product, know full well how important it is to gather and define requirements carefully. The requirements dictated by a customer translate directly into how you design the user interface, the underlying data structures, and the database tables. When requirements are misunderstood, or changed midstream, the effect on your design can be anywhere from mostly harmless to entirely harmful. On the painful end of the spectrum, it leads to redesigns, which cost time and money now, or hacks of your original design, which cost time and money later in maintenance and upgrades.

During a recent project, I was asked/expected to begin development prior to getting the full requirements. We had partial requirements, and I wrongly assumed that the requirements-to-be-named-later would follow the same model/outline as these early ones. So, resting on that assumption, I felt it was safe to go ahead.

What I didn't do was communicate to the customer that I expected the next set of requirements to follow the same model as the previous ones. I just assumed that was what we agreed on, so I never made clear that my design now depended on that "fact" (which was a fact only in my eyes - so really, an assumption). Ouch.

As the later requirements rolled in, they were (of course) not like the earlier requirements. My design was cracking. Hacks were being added. The logical placement and design of the code was getting looser. I cringed the whole time but, in the face of a deadline, had to keep going. There was no time for a redesign at that point. I lost time figuring out the hacks, and developers will later lose time understanding and debugging the modified code. But a redesign would've cost more, with no guarantee that still more requirements wouldn't force further alterations. So hacking it was.

It's not the customer's fault. The customer isn't the one experienced in the software process. The customer doesn't know that I'm making that assumption - he or she doesn't have the software knowledge to know that these details matter. I find that mostly, for non-software professionals, it's all kind of mysterious. A black box, really.

The lesson here is the assumption. I didn't recognize it, and it cost me. I've written customer quotes before with a full page of assumptions written right into the quote, but that unfortunately did not happen this time. One question to ask yourself as you design software is, "What assumptions am I making about the requirements?" Had I asked myself that, I may have realized I was assuming all requirements would follow the same model, and confirmed it with the customer before finishing the design and writing the code.

There's plenty of research out there on how much time and money mistakes cost depending upon where they are found in the project life cycle: the earlier they're found, the easier they are to fix; the later, the more costly and time-consuming. Luckily for me, this project was on a smaller scale, so not much harm has been done in terms of clock-time. If I looked at the delays as a percentage of total project time, I wouldn't be too happy - something like a 5-10% delay. But it's a good lesson to learn without much harm done.

Programming Fun vs. Getting It Done

I'm pretty sure that a lot of programmers are perfectionists. Software is perfect for that, because you can change, change, change things until they work exactly the way you want. In that regard, it also caters to control freaks - just as games like Civilization (of which I'm a recovering addict) do.

These two activities are fun because they let the control freak indulge. A solitary programming project lets the programmer exercise great control over the code, expressing themselves and producing code that looks and runs exactly how they want. This manifests itself most for me during a "spare-time" programming project, with no deadline and no stakeholders. That is when programming is the most fun for me - carte blanche to indulge my whims for the creation and functionality of software. And again, it is the same for building and strategy games like Civilization: you make the calls, you set up the empire, you decide what goes and what stays (should your empire be strong enough for that).

Ahh, but in the real world, projects need to be completed. The truth is that there are always, always things you can do to improve a software project. In my own personal projects, I indulge myself and go back and redo/overdo whatever I think should be better - more efficient, more usable, whatever. But during the 9-to-5, you have to make progress. You have timelines, milestones, and managers wanting to know if you're meeting them.

Of course, as a programmer, you still must write quality code. Good naming conventions, efficient algorithms and memory use, and readable, logically organized code are all important. And like writing an essay, you can write a first draft of your code, then go back and revise it, then revise it some more - endlessly. But at some point, you have to draw the line. You have stakeholders - management and clients - who want the finished product.

Therein lies the challenge - where do you draw the line? At what point is your code, or your user interface, "good enough"? I suppose that's a personal preference, tempered with concerns from management and stakeholders. If given the time, I think most conscientious developers would go on for quite a while improving their own code. I can tell you that as you get more experience, writing quality code the first time gets easier and easier. So the desire or need to go back and revise your code lessens over time.

But here at 28, as a web applications developer - where do I draw the line? That's a big fat "it depends." In one project, the deadline was extremely tight - and we took terms like "Agile Development" and "Rapid Application Development" to a whole new level. For this project, sadly, quality went by the wayside. Thankfully, there was little in the way of processing logic, so not too much damage done. Most of the code just grabbed data from a form and stored it in a database (and back the other way, too). I'm pretty sure all developers frown at that, and so did I.

In another project, I had lighter timeframes, so the line could be drawn more in favor of quality code. We did fail to get all (or even most) of the requirements before starting, which forced me to hack my own design to finish by the deadline; but there was still a mostly unified underlying design that I was allowed to create, improve, and implement. So this code will be more easily understood, upgraded, and maintained in the future.

As for you ... where do you draw the line?

Thursday, April 8, 2010

Programmer? Developer? Engineer? Analyst?

I suppose this is an age-old debate regarding job descriptions and responsibilities. One of my co-workers mentioned today how his responsibility to interact with customers and manage projects interfered with his ability to program. And yes, that makes sense: most professional programmers know that focus is key, and interruptions cost more than the actual clock-time lost to the interruption.

But what exactly is his job? And what does it entail? I can tell you I don't have an actual job description, and neither does he. The idea is to remain flexible - and that's not so bad. It has its pros and cons. Should we complain? Well that depends on what you want. Here's how I see it:

Programmer: Your job as a programmer is to program. You're given technical specifications and you code them. No customer interaction, and not a whole lot of decision-making. Not really solving problems or being creative, except when required by insufficient specs and/or lack of communication from the specs' creators. A "pure" programmer isn't doing much testing beyond what happens during normal development (a topic for another day); the testing goes off to the QA department. So I guess we're assuming here that you work at a company big enough to have engineers, QA, and programmers.

Analyst: "Analyst" implies you have to do some ... well ... analyzing. Probably some customer interaction here. For example, the customer may report a software issue, and it's up to you to do the analysis and come up with a solution. Most likely, you'll code that solution too - I haven't seen a place that separates the programmer and analyst functions. So being an analyst implies a broader range of responsibilities.

Engineer: Sometimes also known as an architect. These are the designers - the closest thing you'll find to computer scientists in the professional world (don't misinterpret this - I'm only implying that "true" computer scientists exist in academia). Engineers translate the business or program needs into data structures, functions, and procedures. They choose the language, the platform, and whatever other technical decisions need to be made to produce a finished software product. I believe they even write the technical specifications - otherwise, who would? A technical writer? It doesn't make sense to me that one person would design the software, communicate that to someone else, and have that someone else write up the technical specification. Engineers have to focus on good design principles; following best design practices makes maintaining and upgrading software much easier down the road (also a blog post for another day).

Developer: This is the most generic term - kind of a jack-of-all-trades, all-of-the-above job description. You simply develop software, whatever that entails. For me, it entails gathering requirements, creating a design, implementing that design (with one or more developers under me), QA, UAT, and delivery. It's a blend of programming, customer interaction, and project management.

And ... this is what I do, and what my co-worker does. We're Software Developers. We wear whatever hat is necessary on any given day. If you enjoy pure programming, being a developer is not for you. If you enjoy working with customers and other developers, maybe a pure programming position (if those even exist - do they?) isn't for you.

This list is open to debate - and probably isn't complete. It's late and I want to go to bed. The world of the software professional has a wide range of responsibilities, and can have a fair amount of depth. It may be different for older programmers, but this is something I've only learned through experience. Fresh out of college (graduated 2003), I had no idea what the software industry was like. None. Haha - and maybe in another seven years, I'll look back at this entry and think the same thing.

Electronic Forms - Inflexible?

Well, yes, they can be. The more "useful" an electronic form is - with data validation, menus, and dropdown boxes - the less flexible it is. Users are forced to enter certain data in certain places. That is a rigid process. Wait, I can hear you say:

"Yes, this is what we want. This makes the data more accurate."

And yes, that's true. But the task I'm currently working on is converting a paper medical form into an electronic form, complete with data processing and storage. And the issue is that the doctor really can't describe how he uses the form. It's not rigid for him - he writes one thing in column A one day, and something else in column A another day. All he has to care about is communicating with himself and his circle of doctors, so it doesn't matter to him whether he fills out the form exactly how it's laid out on paper. Just because a paper form has certain categories, names, and blanks to be filled in doesn't mean the form actually gets used that way.

And what we're trying to do as developers is provide a tool that replicates how a form is used. So this is where the flexibility/inflexibility comes in. If the doctor enters data, and I validate the heck out of it, he or she can't use that field for anything else. Whereas on a paper form, you can use any field for anything you want.

Possible solutions? Extra programming - program in the flexibility. Make the form smart enough to know what the physician (or any user) is trying to do. But do you realize how much extra programming that is? Nuances galore. It's a slippery slope (ha, yes - the ever-feared slippery slope of programming). Sure it's cool, and sure it can be done - but at what cost? How much extra time and creative design is needed to make a form both flexible and smart enough to validate and store the data properly? Part of the appeal of data validation is that processing and handling the data afterwards is easier. So is it wise to spend the extra time?

Or, you can just make generic textboxes for data entry. You take all data, and you don't validate it.

So - as with many things - it's a balance. It's a judgment call on a per-situation, per-form basis. What kind of flexibility is needed for the data entry? If not much, then go ahead and write rigid data-validation routines; it's worth it. But if the form's user might sometimes enter alphabetic data, sometimes numeric, and sometimes something else (not sure what else, but I think you get the idea), then it's advisable to scale back on the rigidity and validation. Remember: if you program in rigid validation and deliver the form to your customer (in my case, a doctor), and he can't enter the data he wants where he wants, he may come back and say, "I need this changed." Whereas if you left it flexible in the first place, he may say, "Nice, this works how I need it."
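One middle ground I can sketch (invented field names, plain JavaScript): validate "softly" - warn when the input looks off, but never block it. The paper form never stopped anyone, either.

    // Warn if the value doesn't look numeric, but store it as-is regardless.
    function softValidateNumeric(input, warningSpan) {
        var looksNumeric = /^\s*-?\d+(\.\d+)?\s*$/.test(input.value);
        warningSpan.innerHTML = looksNumeric ? "" : "Expected a number here";
    }

    // Wired up on blur, e.g.:
    // <input id="dose" onblur="softValidateNumeric(this,
    //     document.getElementById('doseWarning'))">

You keep the nudge toward clean data without taking away the doctor's freedom to use the field however he actually uses it.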

Oh, and if you have other solutions, or think I'm wrong, or think I'm right - say so. Open it up.