azizkhani.net

I know that I know nothing

Retrieve Currently Logged-in Users Using the SessionRegistry

December 28, 2012 17:04 by author Administrator

http://krams915.blogspot.de/2010/12/spring-security-mvc-querying.html

 

http://code.google.com/p/spring3-security-mvc-integration-tutorial/downloads/detail?name=spring-mvc.zip&can=2&q=
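The linked tutorial builds a Spring MVC page that lists signed-in users by querying Spring Security's SessionRegistry. As a quick reference, here is a minimal sketch of the core query (the service and method names are mine, not from the tutorial); it assumes a SessionRegistryImpl bean is registered and populated by Spring Security's concurrent session control:

import java.util.ArrayList;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.stereotype.Service;

// Collects the usernames of all principals that still have at least
// one non-expired session in the registry.
@Service
public class LoggedInUsersService {

    @Autowired
    private SessionRegistry sessionRegistry;

    public List<String> getLoggedInUsernames() {
        List<String> usernames = new ArrayList<String>();
        for (Object principal : sessionRegistry.getAllPrincipals()) {
            // false = ignore sessions that have already expired
            if (!sessionRegistry.getAllSessions(principal, false).isEmpty()
                    && principal instanceof UserDetails) {
                usernames.add(((UserDetails) principal).getUsername());
            }
        }
        return usernames;
    }
}

Note that the registry is only populated when concurrent session control is enabled, e.g. a <session-management><concurrency-control session-registry-alias="sessionRegistry"/></session-management> element in the security namespace configuration plus an HttpSessionEventPublisher listener in web.xml.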



Roles in the IT World

December 4, 2012 23:03 by author Administrator

Reference: Roles in the IT World



Kent Beck: Competence = 1 / Complexity

December 1, 2012 23:11 by author Administrator

This was one of my most popular tweets ever:

the complexity created by a programmer is in inverse proportion to their ability to handle complexity

I wanted to follow up a little, since some of the responses suggested that I wasn't perfectly clear (in a tweet. imagine that.)

The original thought was triggered by doing a code review with a programmer who was having trouble getting his system to work. The first thing I noticed was that he clearly wasn't as skilled as the programmers I am used to working with. He had trouble articulating the purpose of his actions. He had trouble abstracting away from details.

 

He showed me some code he had written to check whether data satisfied some criterion. The function returned a float between 0.0 and 1.0 to indicate the goodness of fit. However, the only possible return values at the moment were exactly 0.0 and 1.0. I thought, "Most programmers I know would return a boolean here." And that's when it hit me.
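Roughly what that looks like in code, as a hypothetical reconstruction (Beck doesn't show the actual function, and these names are invented):

// The reviewed design: a float "goodness of fit" whose only possible
// values are exactly 0.0 and 1.0.
class CriterionCheck {

    double goodnessOfFit(int value, int threshold) {
        if (value >= threshold) {
            return 1.0; // fits
        }
        return 0.0; // doesn't fit -- nothing in between is ever returned
    }

    // The simpler design most programmers would reach for: a boolean.
    boolean satisfiesCriterion(int value, int threshold) {
        return value >= threshold;
    }
}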

 

This programmer's lack of skill led him to choose a more complicated solution than necessary (the code was riddled with similar choices at all scales). At the same time, his lack of skill made him less capable of handling that additional complexity. Hence the irony: a more skilled programmer would choose a simpler solution, even though he would be able to make the more complicated solution work. The programmer least likely to be able to handle the extra complexity is exactly the one most likely to create it. Seems a little unfair.

 

I'm interested in how to break this cycle, and whether it is even possible to break this cycle. I'm certain that this programmer knew about booleans. For some reason, though, he thought he would need something more complicated, or he thought he should look impressive, or he thought the extra complexity wasn't significant, or something. How can someone with ineffective beliefs about programming be helped to believe differently?

 

wooooooow wooooooow woooooooow

 



Kent Beck: New Post

December 1, 2012 22:58 by author Administrator

In the optimization model of software design there are one or more goals in play at any one time--reliability, performance, modifiability, and so on. Changes to the design can move the design on one or more of these dimensions. Each change requires some overhead, and so you would like few changes, but each change also entails risk, so you would like the changes to be as small as possible, but each change creates value, so you would like changes to be as big as possible. Balancing cost, risk, and progress is the art of software design.

 

If you've been reading along, you will know that my Sprinting Centipede strategy is to reduce the cost of each change as much as possible so as to enable small changes to be chained together nearly continuously. From the outside it is clear that big changes are happening, even though from the inside it's clear that no individual change is large or risky.

 

One knock on this strategy is how it deals with the situation where incremental improvement is no longer possible, where the design has reached a local maximum. For example, suppose you have squeezed all the performance you can out of a single server and you need to shard the workload. This can be a large change to the software and can't be achieved by incremental improvements.

 

It's tempting to pull out a clean white sheet of paper when faced with a local maximum and a big trough. However, the risk compounds when replacing a large amount of functionality in one go. Do we have to give up the risk management advantages of incremental change just because we have painted ourselves into a (mixed) metaphorical corner?

 

The problem is worse than it seems on the surface. If we have been making little incremental changes daily for months or years, our skills at de novo development will have atrophied. Not only are we putting a big bunch of functionality into production at once, we developed that functionality at less than 100%. Bad mojo.

 

The key is being able to abandon the other half of the phrase "incremental improvement". If we are willing to mindfully practice incremental degradation, then we can escape local maxima, travel through the Valley of Despair, and climb the new Mountain of Blessedness all without abandoning the safety of small steps. Here are some examples.

 

Suppose we have a class that is awkwardly factored into methods. Say there are 100 lines of logic, the coding standards demand no more than 10 lines per function, and someone took the original 100-line function and chopped it every 10 lines (I'm not smart enough to make this stuff up). What's the best way to get to a sensible set of helper methods? Incremental improvement is hard because related computations can easily be split between functions. Incremental degradation, though, is easy (especially with the right tools): inline everything until you have one gigantic, ugly function. With the, er..., stuff all in one pile, it's relatively easy to make incremental improvements.
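To make the inline-then-extract move concrete, here is a contrived Java miniature of that shape (all names are invented, and it is nowhere near 100 lines):

// The starting point in miniature: logic chopped mechanically into
// helpers, so state leaks across the arbitrary cuts.
class ChoppedReport {
    private double subtotal; // a field only because the cut fell mid-computation

    void partOne(double[] prices) {
        subtotal = 0;
        for (double p : prices) subtotal += p;
    }

    double partTwo(double taxRate) {
        return subtotal * (1 + taxRate); // silently depends on partOne running first
    }
}

// Step 1, incremental degradation: inline the helpers into one ugly pile.
class InlinedReport {
    double total(double[] prices, double taxRate) {
        double subtotal = 0;
        for (double p : prices) subtotal += p;
        return subtotal * (1 + taxRate);
    }
}

// Step 2, incremental improvement: extract again, along the real seams.
class RefactoredReport {
    double total(double[] prices, double taxRate) {
        return applyTax(sum(prices), taxRate);
    }

    private double sum(double[] prices) {
        double subtotal = 0;
        for (double p : prices) subtotal += p;
        return subtotal;
    }

    private double applyTax(double subtotal, double taxRate) {
        return subtotal * (1 + taxRate);
    }
}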

 

Suppose we need to switch from one data store to another. Normalization is good, right? So the incremental way to convert is to denormalize. Everywhere we write to the old store, write to the new store. Bulk migrate all the old data. Begin reading from the new store and comparing results to make sure they match. When the error rate is acceptable, stop writing to the old store and decommission.
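A hedged sketch of that dual-write phase in Java (the Store interface and the comparison policy are my own illustration, not from the post):

// Wraps the old and new stores: every write is denormalized into both,
// and every read compares the two until the error rate is acceptable.
interface Store {
    void put(String key, String value);
    String get(String key);
}

class MigratingStore implements Store {
    private final Store oldStore;
    private final Store newStore;

    MigratingStore(Store oldStore, Store newStore) {
        this.oldStore = oldStore;
        this.newStore = newStore;
    }

    @Override
    public void put(String key, String value) {
        oldStore.put(key, value); // keep the old store authoritative for now
        newStore.put(key, value);
    }

    @Override
    public String get(String key) {
        String fromOld = oldStore.get(key);
        String fromNew = newStore.get(key);
        if (fromOld == null ? fromNew != null : !fromOld.equals(fromNew)) {
            // Log mismatches; trust the old store until they stop appearing.
            System.err.println("migration mismatch for key: " + key);
            return fromOld;
        }
        return fromNew;
    }
}

Once the mismatch log goes quiet, the final increments are to stop writing to the old store and decommission it.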

 

The literature and tools for incremental change betray a bias towards improvement. Fowler's "Refactoring" covers extracting methods more thoroughly than inlining them. Refactoring tools often implement varieties of extract before inline. To be fair, that's the more common direction to move. However, mastering incremental design demands being equally prepared to improve or degrade the design at any time, depending on whether it's possible to incrementally improve. In fact, sometimes when I have degraded the design and discover I still can't make incremental progress, I release my inner pig and make a really big mess.

 

tl;dr If you can't make it better, make it worse.



About the author

Welcome to this website. This page has two purposes: sharing information about my professional life, such as articles and presentations, and sharing content I enjoy with the rest of the world. Feel free to take a look around and read my blog.


Java, J2EE, Spring Framework, jQuery, Hibernate, NoSQL, Cloud, SOA, REST web services, and web stack tech...
