azizkhani.net

I know that I know nothing

21 signs of BAD MANAGERS

November 23, 2013 19:23 by author Administrator

 

 

  1. Bias against action or against planning; simply waiting or postponing forever; embracing the status quo
  2. Secrecy, not willing to share information, giving the feeling that access to information is a privilege reserved for managers
  3. Working very long hours to prove hard work or hide incompetence
  4. Over-sensitivity, reacting immediately, but the reaction is an emotional outburst rather than a real response
  5. Brainwashed by procedures and processes; favoring a process instead of getting things done
  6. Expecting people to read his mind; hand in hand with secrecy – "I keep the info for myself and then blame people for not acting"
  7. Preference for weak employees or candidates, feeling threatened by super-competent employees or candidates
  8. Focus on small tasks, missing the big picture and favoring details of a specific task where he is competent
  9. Inability to hire former employees: none of his former colleagues were convinced to join him in his new company, or he is simply someone who never mentored anyone or never took the time to inspire anyone to trust him
  10. Not setting deadlines; the work is done when it's done… why bother with time-boxed iterations
  11. Favoring consultants instead of growing his staff
  12. Letting his employees feel like anonymous, irrelevant people whose staying or leaving makes no difference
  13. Not measuring and not giving feedback based on real metrics and expectations previously communicated and clarified (rather than on feelings and emotions)
  14. Not telling people what he expects from them
  15. Micromanaging
  16. Sneaky boss – someone who continuously acts or talks behind his employees’ backs, so they are never sure where they stand
  17. Managing his boss more than growing his staff; this is sometimes OK, but in general only to protect his staff or the company from bad decisions coming from upper management
  18. Divide and conquer – a stronger believer in internal competition than in internal collaboration
  19. Ignoring non-performers – we are usually tempted to build on strengths and recognize top performers, but in the end “a chain is only as strong as its weakest link”
  20. Stealing credit – if we win, I will stick my name at the top; if we lose, it is definitely because the team is not mature enough, or understaffed, or… simply “acted without my knowledge, they need some control”
  21. Not believing that HIS JOB IS TO BUILD THE TEAM AND THE ORG, AND THE TEAM AND THE ORG WILL FIGURE OUT HOW TO BUILD THE PRODUCT!!!
    My product as a dev manager is the ORG and/or the TEAM.

 

 



Applying the 80:20 Rule in Software Development

November 16, 2013 18:58 by author Administrator

80:20 Who uses What, What do you Really have to Deliver

Another well-known 80:20 rule in software is that 80% of users only use 20% of features. This came out of research from the Standish Group back in 2002, where they found that:

  • 45% of features were never used;
  • 19% were rarely used;
  • 16% were sometimes used;
  • only 20% were used frequently or always.

 

http://java.dzone.com/articles/applying-8020-rule-software



Thomas Edison

October 14, 2013 22:15 by author Administrator

Time is really the only capital that any human being has and the thing that he can least afford to waste or lose…



Leadership in Software Development

May 19, 2013 21:14 by author Administrator

 

 

Leadership within the software development industry can be a tricky area. All teams require some level of leadership. Promoting members from within a team or organization is a practical choice, but the skills so highly sought after in development don't always translate into good leadership. Developers are very logical and analytical. In the world of DiSC, a behavior assessment tool, most programmers fall into the D (Dominance) or C (Conscientiousness) categories. These individuals are direct, accurate, and task-oriented. Although these traits might seem appropriate, there are many other facets to leadership. Unfortunately, most individuals are thrust into leadership roles with little experience or guidance. Regardless of the situation, members in leadership roles must take the responsibility seriously. Leadership is a continuous journey of learning, teaching, and growing. Knowing this, how does one gain those abilities?

There are a few standard answers to this question. First, everyone receives the opportunity to learn on the job. Although this method works, the road can be difficult to navigate without a map. Second, ask for book recommendations about leadership. Everyone has at least one favorite they can recommend. Third, find a good mentor. Having a proper mentor is an invaluable resource. Don't be afraid to ask individuals if they have time to sit and talk about leadership. Most don't realize the accommodating nature of mentors. Sometimes they forget that those individuals were once in their shoes.

The last option is the most difficult to achieve because good mentors are hard to find, but there are other avenues. Recently, Chick-fil-A® held its annual Leadercast conference. This one-day event tackles leadership through the knowledge and experience of industry experts and seasoned veterans. One can travel to the actual event or view it at a simulcast location. The 2013 event featured Jack Welch, Andy Stanley, Coach Mike Krzyzewski, John C. Maxwell, Dr. Henry Cloud, LCDR Rorke Denver, gold medal Olympian Sanya Richards-Ross, David Allen, and Condoleezza Rice. The sessions were a rich combination of presentations and interviews. Events like this create a sponging effect where years of knowledge and insight are soaked up. Documenting each session becomes vital to encourage the flow of information while still allowing time for reflection and maturation. Below are a few highlights from the 2013 event:

"The key in complexity is to see simplicity."
Some also refer to the quote: "Complexity is simplicity done well." These simple references highlight the importance of finding simplicity in each task required.

"You don't need to be the smartest person in the room."
This is an important reminder for leaders. One doesn't need to answer all the questions or always have the best idea. Empower others to make decisions and utilize the outstanding skills of each team member.

"Yesterday is gone... let go of yesterday."
This is a reminder to not let the sins of the past control the future. Mistakes are only bad if they are not used as a tool for learning. Keep a consistent eye on the future.

"Get it out of your head."
Develop a system to keep track of things. This system must be outside of the brain in a notepad or task list. If left in the brain, one will spend either too much time or not enough on each task. The brain excels at tackling one item at a time.

"If everything is important, then nothing is important."
Leaders usually have an overflowing plate of tasks and responsibilities. It's important to keep a narrowed focus while setting everything else aside. This applies to the team's progress as well.

"Rules don't lead."
Don't attempt to build and enforce rules. Work with teams to devise standards that everyone will hold themselves to. Rules are meant to be broken; standards are meant to be exceeded.

 

ref:http://java.dzone.com/articles/leadership-software



Kent Beck new posts

March 17, 2013 23:32 by author Administrator
  • point of the "code/code/fix/fix" vs "code/fix/code/fix" question wasn't which is "right", but identifying criteria to use when choosing;

 

  • when the first thing i want to do to address the complexity of a program is write a complexity analyzer, i've gone too inception. just clean;

 

  • they are both expensive, but mutable singletons are way worse than immutable singletons because they create temporal coupling;

 

  • if everyone agrees it's a problem, the fact that you've begun solving it is pretty much all the permission you need

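A quick Java sketch of the mutable-vs-immutable singleton point in the third tweet above (the config classes here are invented for illustration, not from the tweet): with a mutable singleton, what a caller gets back depends on whatever some other code did to the shared instance earlier, which is exactly the temporal coupling being criticized; an immutable singleton still carries the cost of global state, but its value can never change underneath you.

    // Hypothetical example: a mutable singleton creates temporal coupling,
    // because the value returned here depends on whether some other code
    // called setTimeoutMillis() earlier in the program's lifetime.
    final class MutableConfig {
        private static final MutableConfig INSTANCE = new MutableConfig();
        private int timeoutMillis = 1000;   // shared, mutable state

        private MutableConfig() {}
        static MutableConfig getInstance() { return INSTANCE; }

        void setTimeoutMillis(int t) { timeoutMillis = t; }  // anyone can change it at any time
        int getTimeoutMillis() { return timeoutMillis; }
    }

    // An immutable singleton still has the cost of global state, but its value
    // cannot change between calls, so call order no longer matters.
    final class ImmutableConfig {
        static final ImmutableConfig INSTANCE = new ImmutableConfig(1000);
        private final int timeoutMillis;

        private ImmutableConfig(int timeoutMillis) { this.timeoutMillis = timeoutMillis; }
        int getTimeoutMillis() { return timeoutMillis; }
    }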

Kent Beck: Competence = 1 / Complexity

December 1, 2012 23:11 by author Administrator

This was one of my most popular tweets ever:

the complexity created by a programmer is in inverse proportion to their ability to handle complexity

I wanted to follow up a little, since some of the responses suggested that I wasn't perfectly clear (in a tweet. imagine that.)

The original thought was triggered by doing a code review with a programmer who was having trouble getting his system to work. The first thing I noticed was that he clearly wasn't as skilled as the programmers I am used to working with. He had trouble articulating the purpose of his actions. He had trouble abstracting away from details.

 

He showed me some code he had written to check whether data satisfied some criterion. The function returned a float between 0.0 and 1.0 to indicate the goodness of fit. However, the only possible return values at the moment were exactly 0.0 and 1.0. I thought, "Most programmers I know would return a boolean here." And that's when it hit me.
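
A hedged Java illustration of what that reviewed code might have looked like (not from Beck's post; the method names and the criterion are invented): the float-returning version can only ever produce exactly 0.0 or 1.0, so a boolean expresses the same result with less machinery.

    class CriterionCheck {
        // What the review found (names and criterion are illustrative): a
        // "goodness of fit" score that can only ever be exactly 0.0 or 1.0.
        static double goodnessOfFit(int value) {
            if (value >= 0 && value <= 100) {
                return 1.0;
            }
            return 0.0;
        }

        // The simpler solution most programmers would reach for: the same
        // check expressed as a boolean.
        static boolean satisfiesCriterion(int value) {
            return value >= 0 && value <= 100;
        }
    }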

 

This programmer's lack of skill led him to choose a more complicated solution than necessary (the code was riddled with similar choices at all scales). At the same time, his lack of skill made him less capable of handling that additional complexity. Hence the irony: a more skilled programmer would choose a simpler solution, even though he would be able to make the more complicated solution work. The programmer least likely to be able to handle the extra complexity is exactly the one most likely to create it. Seems a little unfair.

 

I'm interested in how to break this cycle, and whether it is even possible to break this cycle. I'm certain that this programmer knew about booleans. For some reason, though, he thought he would need something more complicated, or he thought he should look impressive, or he thought the extra complexity wasn't significant, or something. How can someone with ineffective beliefs about programming be helped to believe differently?

 

wooooooow wooooooow woooooooow

 



Kent Beck new post

December 1, 2012 22:58 by author Administrator

In the optimization model of software design there are one or more goals in play at any one time--reliability, performance, modifiability, and so on. Changes to the design can move the design on one or more of these dimensions. Each change requires some overhead, and so you would like few changes, but each change also entails risk, so you would like the changes to be as small as possible, but each change creates value, so you would like changes to be as big as possible. Balancing cost, risk, and progress is the art of software design.

 

If you've been reading along, you will know that my Sprinting Centipede strategy is to reduce the cost of each change as much as possible so as to enable small changes to be chained together nearly continuously. From the outside it is clear that big changes are happening, even though from the inside it's clear that no individual change is large or risky.

 

One knock on this strategy is how it deals with the situation where incremental improvement is no longer possible, where the design has reached a local maximum. For example, suppose you have squeezed all the performance you can out of a single server and you need to shard the workload. This can be a large change to the software and can't be achieved by incremental improvements.

 

It's tempting to pull out a clean white sheet of paper when faced with a local maximum and a big trough. However, the risk compounds when replacing a large amount of functionality in one go. Do we have to give up the risk management advantages of incremental change just because we have painted ourselves into a (mixed) metaphorical corner?

 

The problem is worse than it seems on the surface. If we have been making little incremental changes daily for months or years, our skills at de novo development will have atrophied. Not only are we putting a big bunch of functionality into production at once, we developed that functionality at less than 100%. Bad mojo.

 

The key is being able to abandon the other half of the phrase "incremental improvement". If we are willing to mindfully practice incremental degradation, then we can escape local maxima, travel through the Valley of Despair, and climb the new Mountain of Blessedness all without abandoning the safety of small steps. Here are some examples.

 

Suppose we have a class that is awkwardly factored into methods. Say there is 100 lines of logic, the coding standards demand no more than 10 lines per function, and someone took the original 100 line function and chopped it every 10 lines (I'm not smart enough to make this stuff up). What's the best way to get to a sensible set of helper methods? Incremental improvement is hard because related computations can easily be split between functions. Incremental degradation, though, is easy (especially with the right tools): inline everything until you have one gigantic, ugly function. With the, er..., stuff all in one pile, it's relatively easy to make incremental improvements.
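
A minimal Java sketch of that inline-first move, with made-up domain types and method names (not from the post): the mechanically chopped helpers are inlined into one temporarily ugly method, and only then re-extracted along the real seams of the computation.

    import java.util.List;

    // Illustrative domain types for the sketch.
    record Line(double price) {}
    record Order(List<Line> lines, double discount) {}

    class Pricing {
        // Step 1 (degrade): helpers that were chopped mechanically, splitting
        // one computation across arbitrary 10-line boundaries...
        double part1(Order o) {
            double subtotal = 0;
            for (Line l : o.lines()) subtotal += l.price();
            return subtotal;
        }
        double part2(double subtotal, Order o) {
            return subtotal - o.discount();
        }

        // ...get inlined into a single, temporarily ugly method:
        double total(Order o) {
            double subtotal = 0;
            for (Line l : o.lines()) subtotal += l.price();
            return subtotal - o.discount();
        }

        // Step 2 (improve): with the whole computation visible in one pile,
        // helpers can be re-extracted along its real seams (a subtotal step and
        // a discount step) instead of along arbitrary line counts.
    }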

 

Suppose we need to switch from one data store to another. Normalization is good, right? So the incremental way to convert is to denormalize. Everywhere we write to the old store, write to the new store. Bulk migrate all the old data. Begin reading from the new store and comparing results to make sure they match. When the error rate is acceptable, stop writing to the old store and decommission.
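
A rough Java sketch of that recipe (the store interface and class names are my assumptions, not from the post): during the transition every write goes to both stores, reads are served from the new store, and the old store's answer is used only to count mismatches until the error rate is low enough to decommission it.

    import java.util.concurrent.atomic.AtomicLong;

    // Minimal store abstraction assumed for the sketch.
    interface UserStore {
        void save(String id, String data);
        String load(String id);
    }

    // Transitional repository: write to both stores, read from the new one, and
    // compare against the old one to measure the error rate before cut-over.
    class MigratingUserStore implements UserStore {
        private final UserStore oldStore;
        private final UserStore newStore;
        private final AtomicLong mismatches = new AtomicLong();

        MigratingUserStore(UserStore oldStore, UserStore newStore) {
            this.oldStore = oldStore;
            this.newStore = newStore;
        }

        @Override
        public void save(String id, String data) {
            oldStore.save(id, data);   // old store stays authoritative until decommissioned
            newStore.save(id, data);   // duplicate ("denormalized") write to the new store
        }

        @Override
        public String load(String id) {
            String fromNew = newStore.load(id);
            String fromOld = oldStore.load(id);
            if (fromOld != null && !fromOld.equals(fromNew)) {
                mismatches.incrementAndGet();   // when this stays near zero, stop writing to the old store
            }
            return fromNew;
        }

        long mismatchCount() { return mismatches.get(); }
    }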

 

The literature and tools for incremental change betray a bias towards improvement. Fowler's "Refactoring" covers extracting methods more thoroughly than inlining them. Refactoring tools often implement varieties of extract before inline. To be fair, that's the more common direction to move. However, mastering incremental design demands being equally prepared to improve or degrade the design at any time, depending on whether it's possible to incrementally improve. In fact, sometimes when I have degraded the design and discover I still can't make incremental progress, I release my inner pig and make a really big mess.

 

tl;dr If you can't make it better, make it worse.



Introduction To Scrum

November 23, 2012 13:46 by author Administrator

This post is an introduction to Scrum, one of the Agile methods used to drive software application implementation.

Concepts

  • Scrum projects deliver software application features iteratively.
  • Each iteration is called a sprint.
  • Scrum projects have 4 stages:
    • Planning – Definition of the vision, budget, and expectations. The first version of the product backlog should contain enough implementation items for the first sprint.
    • Staging – This is the first iteration where the requirements and product backlog created in the planning are refined.
    • Development – It is the set of sprints required to implement the project fully. It ends when the product backlog is empty.
    • Release – The final product is deployed, training is performed, documentation is finalized, etc… The release backlog can be used as the product backlog for the next release of the product.
  • There are 4 main roles in Scrum projects: team member, product owner, stakeholder, and scrum master:
    • A product owner is a customer representative acting as a single point of contact on the customer side.
    • A stakeholder is anyone having a vested interest in the project, providing information about requirements and participating to the decision process of features to be implemented.
    • A scrum master is a facilitator and single point of contact on the development team side. It is a support function, not a ‘controlling chief’ role. It is a mix between a team leader and a software engineering role, and does not include people management or profit & loss responsibilities.
    • A team member is a software engineer.
  • Scrum of scrums can be implemented when multiple Scrum teams are working on the same large project: the scrum masters and product representatives get together.
  • The product backlog is a list of features, use cases, enhancements, defects, etc. to be implemented in the project’s forthcoming sprints.
  • The release backlog is the list of features, use cases, enhancements, defects, etc. which are postponed to the next version of the project (not the next sprint).

 
Practice

  • Scrum teams typically do not have more than 7 members.
  • A sprint duration is 30 days.
  • Scrum projects are client-driven: the client selects the features to be implemented.
  • Each sprint begins with two meetings:
    • The stakeholder meeting, with the scrum master and customer representative to re-prioritize the product backlog and update the release backlog.
    • The product owner and team meeting where tasks are created from the product backlog.
  • Each task must require between 4 and 16 hours of work; bigger tasks must be subdivided into smaller tasks (see the sketch after this list).
  • The scrum master and product owner check whether there are enough resources to support the efforts required to achieve the sprint and adjust the workload accordingly.
  • Team members pick their tasks and work on their implementation.
  • Each sprint finishes with a presentation of the implemented features to the stakeholders.
  • Every day, a stand-up meeting (15-20 mins) with the team members, and possibly the product owner, is organized in front of a white board. The following is discussed:
    • Which tasks have been achieved since the last meeting?
    • Are there new tasks? New requirements?
    • What is blocking? Any impediments?
  • If decisions have to be taken, the Scrum Master should take them quickly, within an hour if possible
  • The Scrum Master should deal quickly with any impediments and communication issues
  • Scrum teams should preferably operate from the same room
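
Purely as an illustration of the task-sizing rule mentioned above (this is not part of Scrum itself, and the types are invented), a small planning helper in Java could split any task whose estimate exceeds the 16-hour ceiling into roughly equal sub-tasks:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: a backlog task with a rough estimate in hours.
    record Task(String name, int estimatedHours) {}

    class SprintPlanning {
        static final int MAX_HOURS = 16;   // tasks above this get subdivided

        // Split an oversized task into roughly equal sub-tasks that fall back
        // inside the 4-16 hour window; smaller tasks are kept as they are.
        static List<Task> normalize(Task task) {
            List<Task> result = new ArrayList<>();
            if (task.estimatedHours() <= MAX_HOURS) {
                result.add(task);
                return result;
            }
            int parts = (int) Math.ceil((double) task.estimatedHours() / MAX_HOURS);
            int perPart = (int) Math.ceil((double) task.estimatedHours() / parts);
            for (int i = 1; i <= parts; i++) {
                result.add(new Task(task.name() + " (part " + i + ")", perPart));
            }
            return result;
        }
    }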

 
Conclusion

As with all Agile methods, the Scrum approach is best suited for new projects created from scratch. Over time, as successive application releases are implemented, the project can slowly transform into a continuous integration or even maintenance project, where less stakeholder input or daily supervision is required. The cogs are well oiled and operate naturally.

Reference: Introduction To Scrum



composition of unreliable services

September 18, 2012 21:25 by author Administrator

Kent Beck

"The mutually beneficial composition of unreliable services can be more reliable than a single service that tries to be totally reliable".

 

The combination of two unreliable services is more reliable and more beneficial than a single service that tries to be more reliable.
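
A back-of-the-envelope illustration with my own numbers, not Beck's: if two independent services each succeed 90% of the time, trying one and falling back to the other succeeds 1 - 0.1 × 0.1 = 99% of the time. A minimal Java sketch of that fallback composition, with invented names:

    import java.util.function.Supplier;

    // Hypothetical composition of two unreliable services: try the primary and
    // fall back to the secondary if the primary fails.
    class FallbackService<T> {
        private final Supplier<T> primary;
        private final Supplier<T> secondary;

        FallbackService(Supplier<T> primary, Supplier<T> secondary) {
            this.primary = primary;
            this.secondary = secondary;
        }

        T call() {
            try {
                return primary.get();
            } catch (RuntimeException primaryFailure) {
                return secondary.get();   // the composition fails only when both services fail
            }
        }
    }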



agile

September 2, 2012 23:01 by author Administrator

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.



About the author

Welcome to this website. This page has two purposes: sharing information about my professional life, such as articles and presentations, and sharing content I enjoy with the rest of the world. Feel free to take a look around and read my blog.


Java, J2EE, Spring Framework, jQuery, Hibernate, NoSQL, Cloud, SOA, REST web services, and web stack tech...
