Saturday, October 10, 2009

Travel

Please refer to my Chinese blog in China; this topic is safe to publish inside China.

Storage Area Network

1. What is a storage area network?

A storage area network (SAN) is an architecture for attaching remote computer storage devices, such as disk arrays and tape libraries, to servers in such a way that the devices appear local to the operating system.

In contrast to a SAN, network-attached storage uses file-based protocols such as NFS or SMB/CIFS, where it is clear that the storage is remote and computers request a portion of an abstract file rather than a disk block.

A computer cluster is a group of tightly coupled computers that work together so that in many respects they can be viewed as though they were a single computer.

2. Why do we need a storage area network?

....

3. How does a storage area network work?

Most storage networks use the SCSI protocol for communication between servers and disk drive devices, though they do not use its low-level physical interface, instead using a mapping layer such as the FCP standard, which carries SCSI over Fibre Channel.

Programming

Most people think that programming is just writing a few lines of code. Actually, a good programmer not only needs to understand programming language syntax very well, but also needs to be comfortable with many tools and to build up good programming practices. There ARE some superstar programmers, but a superstar programmer can spoil your delivery, since only he knows everything: when it comes to the customer support stage he will be too busy to answer all the questions, and the day he leaves the company, nobody can take over his job. Good software must be the result of collaboration between the development team, the test team, many support teams, and your customer.

I try to describe my software engineering experience here from a technical perspective; there is also an article on this website about software engineering from a project organization perspective.

First of all, some software process people believe that the "bug density" of software comes from the software process and quality control only, and should be independent of people. Yes, it IS driven by process and quality. However, the project organization, the tech leader's technical capability, and whether the team is willing to do the project also matter (don't laugh, sometimes people just get annoyed and don't want to do anything).

1. GNU development tools on Linux

2. Other Tools

3. Programming Languages

4. Good practices

1. GNU development tools on Linux

GNU development tools form a complete framework. They are absolutely free and powerful. Some people say there is nothing but the GNU way to develop software; it is what a typical Linux/UNIX software developer uses in his everyday work. GNU development tools are no worse than the commercial (expensive) so-called "killer apps" widely used on other platforms. GNU tools are your friends for any development work done on Linux. Before you purchase new software development tools, think it over; I bet a GNU solution could satisfy your requirement.

GNU native tools

This set of tools is used for the x86/Linux native development I used to work on.

GNU cross tools

The Linux platform supports virtually everything required for embedded development. Some companies had to use a Windows cross-development environment simply for historical reasons. You may find information on the Linux cross-development environment here.

2. Other Tools

Other tools

However, there are still some commercial tools you should know. Anyway, everyone wants to use free tools and sell their own product at a high price; if nobody bought commercial tools, many software engineers would lose their jobs, too.

3. Programming Languages

Programming Languages

4. Good practices

Good Practice

Embedded Linux Development Environment and System Programming

Purpose: This course was delivered to second-year Master's students at the Software School of Beijing University, from July 15, 2003 to August 28, 2003.

1. Objective

This course is designed to give Master's students and embedded software engineers:

  1. A preliminary understanding of the key techniques and tools used to implement embedded software;
  2. Hands-on experience with the GNU tools used to develop embedded Linux based software;
  3. Hands-on experience with the basic features of embedded Linux system programming;

This course is not designed for the students to understand everything about embedded systems, which is not possible anyway. Although we use only GNU tools and the PowerPC platform as the example throughout the course, the students are expected to gain the ability to pick up other tools and target platforms in a short time frame, based on their understanding of the GNU environment and the PowerPC platform.

This course is not intended to teach:

  1. Software design processes and methodology;
  2. C/C++ syntax and the Linux OS;
  3. Target platform (PowerPC) architecture;

However, understanding of these domains is a prerequisite for the course.

2. Course Outline

You can expect the following questions to be answered during the course.

First of all, why do we need embedded systems?

How special is an embedded system? Or, how different is it from the desktop environment?

What is the development life-cycle of an embedded software project? Again, how is it different from a desktop software project?

What are the GNU tools used for a complete development life-cycle?

I have seen some tools not covered by this course; how can I understand them quickly?

What are the basic features of embedded Linux?

Why are these features important? What are the philosophies behind them?

My new embedded software project uses a scheduler I have never heard of before; how do I analyze it?

3. Instruction Methods

This course is a hybrid of:

  1. Lectures;
  2. On-class hands-on;
  3. Group discussion;
  4. Off-class projects.

4. References

How to make your WindowsOS execute faster?

My Windows XP becomes slower and slower over time, so I had to google for tools and tricks to optimize it as much as possible. However, many so-called "Windows optimization tricks" do not work as specified on the web page. After several rounds of trial and error, I list here the steps I found useful for my Windows XP optimization.

1. Remove unnecessary software and components, especially Windows Index services;

    Go to Control Panel -> Add or Remove Programs, check the rarely used software, whether it was installed intentionally or not, then uninstall what you don't need.

    Go to Control Panel -> Add or Remove Programs -> Add/Remove Windows Components and remove the components you don't need, for example the Windows Indexing Service.

    Disable unnecessary start-up items. Googling "Windows Startup Manager" will list tools that help with this.

2. Delete unnecessary files;

    Many Windows optimization tools offer a utility to clean up unnecessary files. I am using this one: http://www.ccleaner.com/

    The tool can only delete temp files created by installation processes or Windows applications. If you downloaded a file and later forgot about it, the tool won't know. So it is important to keep your directory structure organized. For me, I create a <Root>/Data directory to store all human-created data (as opposed to directories created by installers). I always download articles to <root>/Data/temp first, and only move binaries or documents into my permanent directory structure if I think they are useful.

3. Optimize your Windows appearance

    Go to Control Panel -> System -> Advanced -> Performance Settings -> Visual Effects, choose Custom and tick:

         Show shadows under menus

         Show translucent selection rectangle

         Smooth edges of screen fonts

         Use drop shadows for icon labels on the desktop

         Use visual styles on windows and buttons

    You may have other preferences, but I found this tick-list closest to the Windows default and to what I feel comfortable with.

4. Stop unnecessary Windows services:

    Windows starts a lot of services by default, and most of them are not necessary, so go to Start -> Cmd -> services.msc and select the services you want to stop. However, many optimization web pages ask you to go through the services one by one and stop a lot of them, which may be hard for novices. Beyond the default settings, I only changed 3 of them to Manual: Remote Registry, Telnet, and Indexing Service.

5.  Clean up registry;

     It is dangerous to edit the Windows registry (though sometimes I do); better to find a tool that does it automatically.

6. Irq14=4096.

    It is said that adding Irq14=4096 will improve hard-disk performance. I did so, but I am not sure whether my hard-disk performance improved, nor did I figure out the reason. The article said it increases the buffer for hard-disk read/write, but I could not find out what the default buffer setting is.

7. De-fragment your hard-disk regularly;

    Find your C: or D: drive in Windows Explorer, click Properties, use the built-in Windows tool to defragment the drive, and reboot.

IT Services

Many people come to me with IT configuration questions. This page lists most of my recommendations for configuring a PC and a SOHO network. It is open for discussion; please send your comments to jinsheng_tang@yahoo.com

1. Set up your PC.

It is better to set up your PC to dual-boot Windows XP and Linux if you are an IT professional or want to become one; otherwise please check the Windows section only.

It is easy to set up your PC with a Windows and Linux dual boot; please google "config Windows Linux dualboot" for detailed procedures.

The recommendation here is:

1. Partition your hard disk with 8-10G for the Windows OS, to be used for installing the Windows OS and applications only.

You may wonder why so much disk space is required: it is reserved for multimedia tools, various Java virtual machines, documentation tools like MS Office, plus Visual Studio and Oracle tools.

You may need 2-4G more if you want to play around with Virtual PC, but I don't encourage it: I felt it was very slow and not worth using. I'd rather use 2 PCs instead of setting up Virtual PC.

Even if you are not going to play around with Oracle, a Java virtual machine, or Visual Studio, I bet you will need several multimedia tools, so reserve 5G at least.

2. Assign 6-8G for Linux

Red Hat Linux 9.0 needs nearly 5.0G for a complete installation, plus Linux swap and boot. You may want to deselect many language packages during installation to save disk space, or sort the RPMs by size (google for a Perl script to do it) and remove the unnecessary large ones; eventually around 1G of space can be saved.

I have encountered the problem that some automatic compile scripts or Makefiles cannot be executed except on a Linux-specific file system (ext2, ext3), so I reserved 3G of space for building a cross-development environment or running big automatic compile scripts.

3. Leave the rest to store data only, and make this partition accessible by both Linux and Windows;

It's better to decide on your directory structure and not change it often, even when your computer changes; this makes backup and maintenance easy.

2. Windows

Recommended applications to install for the Windows OS:

Full set of Microsoft Office;

Adobe Acrobat;

MyIE2 (browser);

UltraEdit;

Antivirus software;

Media players: 金山影霸 (Kingsoft Media Player), RealOnePlayer, 暴风影音 (Baofeng Storm Player);

Download tools: NetTransport, CuteFTP;

Quick Screen Capture;

System maintenance: System Mechanic or a Windows optimizer;

3. Linux

Linux is mostly installed by IT professionals for investigation; it is still weak for common desktop applications.

I usually use all the default packages for desktop usage.

For office documents, use OpenOffice.

If you are a Far East user, please refer to http://www.opencjk.org/projects/wineinput/index.html

If you are an IT professional and want to investigate some advanced usage, please refer to:

a. Building a cross-development network, in the Embedded section;

b. Setting up a SOHO network with Linux.

4. SOHO

Many salespeople may approach you to sell products for your SOHO network. I prefer to use Linux for all of this, although it usually means a lack of technical support from suppliers. Actually, you can find almost all the solutions on the internet.

The following configurations are essential for a software-development-based company.

a. Networking. It is recommended to get a dedicated low-end router plus several switches, although you could configure one Linux machine as the router.

b. Define the IP address range, and bind IP addresses to MAC addresses so they are easy to manage.

   Always use DHCP to assign the IP addresses.

c. Configure iptables, or your router may have its own way to configure a firewall;

d. Set up NIS (Yellow Pages) so that all your intranet users can use the same account regardless of where they log in;

e. Set up a quota limit for every user account.

f.  Set up a common temp area for all users so that they can exchange huge files even if they exceed the quota limit.

g. Monitor your network if necessary.

h. Back up all your data hourly, daily, weekly, and monthly.

This page has links to some useful online configuration resources that I have used.

They can be used for setting up either a desktop environment or an embedded development environment.

How to setup J2EE, J2ME for Mobile Applications?

How to run IIS in Windows XP Home Edition?

How to install a Linux Virtual PC?

How to use Chinese in Linux?

How to setup WinCVS with SSH?

How to create a Linux daemon?

How to run XWindows Applications Remotely?

How to use ramdisk for Linux?

How to do DiskOnChip development?

How to config Apache?

How to debug Linux kernel?

How to use cgi-bin?

Definition of Real-Time OS

A lot of people think a real-time OS means a fast OS; actually, an RTOS means that process deadlines can be met generally (soft real-time) or deterministically (hard real-time). An RTOS is valued more for how quickly and/or predictably it can respond to a particular event than for the amount of work it can perform over a given period of time. Key factors in an RTOS are therefore minimal interrupt latency and minimal process (or thread, which is different) switching overhead.

This is one of the OS charts I created to explain OS task switching. It helps to explain time-slice based task switching, with interrupt support.

PreemptOS

In general, there are two design approaches associated with this: the preemptive OS and the cooperative OS. A preemptive multitasking OS will interrupt a running process when its time slice is up (or for any other reason). Cooperative multitasking, on the other hand, relies on the process itself to be nice and hand over control to other processes when it does not need it. In the latter case, a poorly designed application can easily monopolize the entire machine.
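To make the contrast concrete, here is a minimal cooperative-multitasking sketch in C (the task names are invented, and a real RTOS would of course do far more): the scheduler only regains control because each task voluntarily returns, which is exactly why one misbehaving task can monopolize the machine.

    #include <stdio.h>

    /* A cooperative "task" is just a function the scheduler calls repeatedly.
     * Each call must do a small amount of work and then return (i.e. yield). */
    typedef void (*task_fn)(void);

    static void task_blink(void)  { puts("blink: toggle LED, then yield");  }
    static void task_sensor(void) { puts("sensor: sample ADC, then yield"); }

    int main(void)
    {
        task_fn tasks[] = { task_blink, task_sensor };
        const int n = sizeof(tasks) / sizeof(tasks[0]);

        /* Round-robin loop: control only moves on because each task returns.
         * If task_blink looped forever, task_sensor would never run, which is
         * exactly the risk described above for cooperative multitasking. */
        for (int tick = 0; tick < 3; tick++)
            for (int i = 0; i < n; i++)
                tasks[i]();
        return 0;
    }

A preemptive kernel removes that trust: a timer interrupt forces the switch, so the latency depends on the kernel, not on the politeness of the tasks.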

Project List

As time goes by, I am afraid I will forget what I did in the past, so I list my past projects here for reference.

Project Name | Duration | Location | Responsibility | Project Summary
DS2000 CPR | 24 months | Guangzhou, China | Developer | Developed the power-on self-test program for a dual write processor board, and conducted the test.
DS2000 DSW | 12 months | Guangzhou, China | Team Leader | PCB layout design with Mentor Graphics tools.
Telecom Training | 12 months | All over China | Presenter | Pre-sales support; presented telecom infrastructure products.
DORMIA | 24 months | Singapore | Sole Developer | A distributed system for mobile information access, with multicast middleware support.
cdma2000 RNC | 15 months | Singapore | Key Developer | Embedded software / object-oriented modeling / C++ coding in the telecommunication domain, to build a Radio Network Controller (an element of CDMA infrastructure).
UMTS CLPc | 7 months | Singapore, Dallas-Fort Worth | Team Tech Leader | Another software-based element for the CDMA (UMTS) system, in the telecom system / HAP platform / C++ coding domain.
ISTUE | 6 months | Singapore | Project Manager | Integrated Set-Top Box User Environment, a prototype on a Windows component-based architecture for feature development.
CDMA2000 System Integration | 6 months | Chicago | Feature Leader | 3G wireless telecom system integration: lab integration and site integration support.
Internal BTS Router | 8 months | Singapore | Team Test Leader | Embedded software element / transport layer / real-time Linux, to implement basic IP routing functionality.
Insec | 6 months | Beijing, China | Department Manager | IP security / real-time Linux; ported FreeS/WAN to the MPC860 environment and added new security features to the Linux kernel.
XMRadio | 1 year | Singapore | Project Manager/Architect | A software component to handle a US satellite radio receiver.
CMMi Process Improvement | 5 years | Singapore | Contributor | Many activities to identify and improve CMMi levels for two global firmware organizations, from various CMMi perspectives.
Software Entropy Reduction | 3 years | Singapore | Contributor | The aggregation of many activities to reduce the software entropy of a codebase of more than 1 million NCSL, related to some extent to the software production line.
Solid State Drive | 2 years | Singapore | Contributor | Solid-state drive controller.

Light-weight CMMi deployment

I was talking to one of my ex-supervisors at Motorola about his experience deploying CMMi to another organization outside Motorola. There are some interesting findings about customizing CMMi for different organizational cultures. As Motorola has officially announced the closing down of its Singapore Software Center, we can now comment a bit on its CMMi process and what other organizations can learn from it.

Motorola is famous for deploying CMMI level 5 in its various software centers around the world. These software centers are such process-oriented organizations that low to mid-level management take it for granted that the software requirement, design, implementation and testing documents must be in place, and that process audits must be in place. Plenty of diagrams and analysis reports are generated, and they become over-complicated, for good and for bad.

In the early 1990s, software was such a special technical skill that only a small group of elite people could do it well, so Motorola set up various software centers around the world, with a software process deployment initiative. Both the organization model and the process proved quite effective initially. Later, as software complexity grew, the software process also grew to ensure the quality and efficiency of the code, which is excellent. However, these centers are far away from the end customers, and the marketing team cannot transfer their pressure to software developers effectively in this organization structure. Mid-level management turns to a process focus instead of a customer focus, because that is the instruction implied by the evaluation process (for sure, top management still emphasizes that they are marketing oriented). All software projects are internal, which makes them quite easy-going. When the profit margin of the software business goes down and the company's eco-environment gets worse, eventually the company cannot afford this over-complicated business model and closes down the center.

This does not mean it is impossible to marry the good merits of the process-oriented approach to the innovation-oriented approach. One compromise we found is to simplify the CMMi process and deploy it implicitly. More specifically, requirement management and software configuration management are the two most important KPAs a software organization needs to pay attention to; next come implementation and test definition, i.e., implement what you are required to implement, and test what you are required to test. By telling your customer that you implement the customer requirement, and by controlling your software team so that the entire team works towards the same goal instead of working against each other (not personally, but because either the architecture or the process has flaws; I have seen a lot of examples of people working against each other), you have the CMMI cornerstone laid out.

The process effort should not exceed 5% of the total project effort. If there is no full-time process team, then do 2 things:

1. Have one internet-savvy young engineer update and maintain an intranet to publish requirements (it is important to put requirements, designs, … into a common repository).

2. List the key process flow on one page (A4), make sure the font size is readable, let the team have free access to it, and have an experienced engineer audit it regularly (biweekly or monthly). A white paper (such as a Software Development Manual) of more than 2 pages is only useful for the process team; nobody in the project team will read it, so DON'T waste your time.

In summary, the ultimate goal for an organization is not to reach a certain CMMi level (though it is a side output, necessary for many human performance evaluations), but to make the final output predictable and quantifiable. Process is like lubricating oil: you won't feel it if your machine is running smoothly, yet it is actually performing its duty implicitly. The most important thing: the software process must come from real-life experience. There are endless documents out there; people won't be convinced if you are only repeating a story from a book.

Why Embedded

Linux now spans the spectrum of computing applications in the embedded world, and there are endless articles on the internet about embedded Linux. This article is a collection of thoughts from teaching embedded Linux at Beijing University and leading related projects. I hope it can help people with some computing background ramp up in this domain.

Why Embedded?

The computers used to control equipment, otherwise known as embedded systems, have been around for about as long as computers themselves. They were first used back in the late 1960s in communications, to control electromechanical telephone switches. Thousands of Chinese engineers were working on digital exchange development in the early 1990s; I was fresh at the time and lucky to be one of them. I was writing boot code for an 8086 board and playing with a logic analyzer every day, but did not know the word "embedded". Only after several years of work did I suddenly become aware: this is embedded.

As the computer industry has moved toward ever smaller systems over the past decade or so, embedded systems have moved along with it, providing more capabilities for these tiny machines. Increasingly, these embedded systems need to be connected to some sort of network, and thus require a networking stack, which increases the complexity level and requires more memory and interfaces, as well as, you guessed it, the services of an operating system.

Off-the-shelf operating systems for embedded systems began to appear in the late 1970s, and today several dozen viable options are available. Out of these, a few major players have emerged, such as VxWorks, pSOS, Nucleus, and Windows CE.

How to evaluate your firmware debugging environment?

It is quite common for embedded system developers to spend more time figuring out their environment than actually testing their code. Therefore, it is vital to select the right tools at all stages of the entire life cycle. This article tries to summarize the experience gained on embedded system development environments.

1. Get a good external/internal debugger.

2. You may not have the luxury of connecting an external debugger (for example, in factory production there is no space to solder a JTAG connector); then use the ancient method: print out to a serial port.

3. You may not have the luxury of printing out due to performance reasons, but at least keep a UART connector, instrument the code to dump trace to flash or even DRAM, and find a way to extract it later. If it is flash, you are lucky: you can take the equipment out and analyze it somewhere else. I like it.
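As a rough sketch of the trace-to-RAM idea (the buffer size, field layout and names are all assumptions, not from any particular project): log entries go into a small circular buffer with no I/O in the hot path, and the buffer is dumped later through whatever channel is available.

    #include <stdint.h>
    #include <stdio.h>

    #define TRACE_ENTRIES 256u              /* assumed size; must be a power of two */

    struct trace_entry {
        uint32_t timestamp;                 /* e.g. a free-running hardware timer */
        uint16_t event_id;
        uint16_t data;
    };

    static struct trace_entry trace_buf[TRACE_ENTRIES];
    static volatile uint32_t  trace_head;

    /* Cheap enough for time-critical code: no printf, no waiting on a UART. */
    static void trace_event(uint32_t now, uint16_t id, uint16_t data)
    {
        uint32_t i = trace_head++ & (TRACE_ENTRIES - 1);    /* circular wrap */
        trace_buf[i].timestamp = now;
        trace_buf[i].event_id  = id;
        trace_buf[i].data      = data;
    }

    /* Called later (crash handler, debug shell, or after reading back the flash
     * image) to push the captured history out through any available channel. */
    static void trace_dump(void)
    {
        for (uint32_t i = 0; i < TRACE_ENTRIES; i++) {
            const struct trace_entry *e =
                &trace_buf[(trace_head + i) & (TRACE_ENTRIES - 1)];
            if (e->event_id)
                printf("%u: event %u data %u\n",
                       (unsigned)e->timestamp, (unsigned)e->event_id, (unsigned)e->data);
        }
    }

    int main(void)
    {
        trace_event(100, 1, 42);
        trace_event(101, 2, 7);
        trace_dump();
        return 0;
    }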

4. A logic analyzer can indeed be used to debug CPU execution, provided you have the corresponding pod and software package. Its distinct advantage is that it can trigger/capture on external signals. JTAG can help, but it is not as comprehensive as a logic analyzer.

What else?

How to design an internal debug environment and exception handling system is a big topic. You are welcome to discuss the general rules of "good" debuggability.

An ARM CPU mode diagram

I had several rounds of discussion with ARM support engineers and have created a PPT slide on ARM modes. I thought it would be very helpful for understanding the differences between the various ARM CPU modes and how to switch between them.


Can you figure out the difference between ARM IRQ and FIQ from the next diagram?

A metaphor for distributed system

There are many official, or rather impenetrable, explanations of distributed systems. When I was studying the topic at NUS, it really took me some time to understand it.

Actually, I think a distributed system can easily be explained by comparison to our traffic system. Every vehicle is a self-controlled system with its own thinking that follows certain rules, and the entire system works well. A traffic accident happens either because the law is not good (very unlikely) or because some unit breaks the law, and then the entire system has some way to recover.

WYSIWYG Embedded Programming

WYSIWYG programming is a popular term in the PC software world, for example in web design or PC object-oriented programming. I am a big fan of tool-aided code generation for the embedded world; more specifically, using actor incarnations and finite-state diagrams, as in the example below.

BTW: Please note that an object-oriented style of programming does NOT require an object-oriented language. I could not find a suitable term for it, so I call it "WYSIWYG Embedded Programming".


An example of Actor diagram (instance of Object).


Protocol (structure definition) and FSM (dynamic behavior) example.

The question here is about reliability. You are comfortable browsing a web page generated through WYSIWYG. But if you are driving a car whose automatic collision avoidance is controlled by auto-generated code, do you trust the firmware or the human?

My answer is that the WYSIWYG tool is more trustworthy. As in the PC world, our embedded teams are usually composed of engineers of all levels and various backgrounds. The great advantage of WYSIWYG is that it allows the user to visualize the target's breakdown into static and dynamic behavior, and even to step and debug graphically. It not only helps developers gain a better understanding of the architecture, but also enables team members to collaborate more efficiently based on that understanding, especially when there are many new team members.

With the structural breakdown, it is easier for developers to design the code to cover more comprehensive cases, and to find more corner cases for testing. There are cases where a non-scalable architecture eventually makes it next to impossible for one human to oversee all the logical paths of the project, and costs an enormous amount of effort on repeated but inefficient testing.

WYSIWYG may have an impact on performance and code footprint. This is another solvable issue; there are various efforts in the industry tackling it, which are worth discussing separately. In fact, my experience shows that the human factor may pose more overhead.

A Light-Weight SCM site

There are plenty of resources about software configuration management on the web now. However, what a development team normally needs is only a few key ideas on how to manage their codebase and documents, and a limited number of document templates to start with. Some teams cannot even afford a full-time SCM manager. It is not necessary to study SCM theory thoroughly.

SoftwareCM.org was created to help beginners get started within several hours. There is also a full list of vendor and resource sites, each with remarks of no more than 3 sentences. I came across this site by chance; it is a good starting point.

Software Configuration Management

I am now heavily involved in the global effort of one big MNC to improve software configuration management at the corporate level. This situation reminded me that I should create a reference article on my understanding of software configuration management and all the tools I have used, like CVS, ClearCase, SourceSafe, Telelogic Synergy, and Perforce.

1. What is Software Configuration Management

Roger Pressman, in his book Software Engineering: A Practitioner's Approach, says that software configuration management (SCM) is a "set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made." In other words, SCM is a methodology to control and manage a software development project.

I think modern SCM has become a concept interwoven with the development process and the software architecture. But first of all, let's start with the conventional (or traditional, standard, basic, or whatever you want to call it) concept of software configuration management: to manage software variation over time. In other words, to version-control the code, track and manage changes with labels and version numbers, and reproduce a release upon request.

2. SCM vs. other Software Engineering Items

SCM is closely related to the development process and to the architecture of the software system. More can be found at http://www.softwarecm.org/

Software Engineering

I like the definition of software engineering on Wikipedia: software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. If you ask one hundred software engineers about the scope of software engineering, you might get 100 different answers. In my view, there are really only 2 subcategories:

  1. How to present the software design? In other words, software modeling, or more concretely, a graphical representation of the software blueprint;
  2. How to make software engineers collaborate towards the same goal (instead of working against each other; don't laugh, it happens)? Or in other words, how to manage the space and time diversity of a software product, and how to break down the complexity of a software/firmware product (modularization).

The era when a single software hero could create an entire industry-successful software product is gone. OK, don't argue with me that Linus Torvalds is maintaining the Linux kernel by himself (still with other people's help). There is only one Linus Torvalds in the world; you and I are not as smart as he is, and I am talking about what's happening in the industry. The industry trend is really how to manufacture software products through the collaborative effort of teams made up of normal people located across the globe. Software architects and managers should really be concerned with making software implementation work and resource allocation more predictable. Having said that, they should really think about the above 2 software engineering questions.

In my opinion, these 2 questions can be expanded into many more questions, as below. There are tons of books discussing them; I merely list the possible questions for reference.

How to present the software design?

  1. What is the most efficient way to model a software system? Why is this modeling important?
  2. What is a suitable tool for software modeling in your project?
  3. Is it possible to auto-generate the code or code stubs?
  4. How to bridge the gap between business requirements, software design, implementation, testing and support? In other words, what is the best way to represent the software requirements?
  5. How to control the auto-generation overhead? If the overhead of auto-generated code is not acceptable, is there any way to retain the graphical design and write the efficient code manually?
  6. Is it possible to test before implementation? Sounds weird? However, it is possible to test just the code stubs, which is very useful for design validation at an early stage (see the sketch after this list).
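A tiny illustration of testing against stubs in C, with entirely hypothetical names: the temperature driver is only a stub returning canned data, yet the caller's decision logic can already be validated long before any hardware or real implementation exists.

    #include <assert.h>

    /* Interface the real driver will eventually implement. */
    int read_temperature(void);

    /* Stub: no hardware yet, just canned data good enough to exercise callers. */
    int read_temperature(void) { return 75; }

    /* The logic being validated long before the real driver exists. */
    static int fan_should_run(void) { return read_temperature() > 60; }

    int main(void)
    {
        assert(fan_should_run() == 1);      /* early design check against the stub */
        return 0;
    }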

How to make software work predictable?

  1. How to design software components or improve the modularity of an existing codebase?
  2. How to control the time variation? This comes down to the topic of software version control.
  3. How to control the space variation? In other words, software reuse, or variation-based software reuse.
  4. How to estimate software effort based on software components?
  5. How to predict software productivity based on engineers' capability and component complexity? Subsequently, how to make a realistic software schedule?
  6. How to make testing more effective? In other words, how to set up an effective test environment capable of generating many corner cases, and how to quantify and improve test coverage?
  7. How to make it easy for new engineers (or all engineers) to ramp up on a project?
  8. How to break down the product scope and make project risk more predictable?

About Software Modularity

I am involved in a project to modularize a big codebase (about 1 million lines including comments), which raises two interesting questions:

1. What does software modularity really mean?

2. How does software modularity impact the full life-cycle of the development?

1. What does software modularity really mean?

Software modularity can mean many things. For example, it can mean that each software module can be compiled independently, with no header file or logic dependencies on other modules. It can also mean that each module can be executed independently and linked at runtime, like Windows DCOM or Windows/UNIX shared libraries.

Normally any software starts with some form of modular design. However, the modularity may deteriorate along the way if it is not well maintained, or if the original modular design is not good enough; "not good" here means there is no way to enforce modularity at the tool level, and everything depends on human best practice.
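As a minimal sketch of one flavor of modularity in C (the module name is hypothetical): other modules include only the header below, while the structure layout stays private to the module's own .c file, so there is no header or logic dependency on its internals.

    /* motor.h: the only thing other modules are allowed to include. */
    #ifndef MOTOR_H
    #define MOTOR_H

    struct motor;                           /* opaque: layout hidden from clients */

    struct motor *motor_open(int id);
    int           motor_set_speed(struct motor *m, int rpm);
    void          motor_close(struct motor *m);

    #endif /* MOTOR_H */

The struct itself is defined only inside motor.c, so a change to its fields cannot ripple into other modules, and the rule can even be checked mechanically, for example by a script that rejects any #include of another module's private headers.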

2. How does software modularity impact the full life-cycle of the development, or even the success of the project?

The level of software modularity has a huge impact on project execution and even on success or failure. It not only causes technical issues, but also impacts management. A well-modularized codebase enables a group of experienced engineers to focus on the overall picture without needing to look into the details, while other groups of engineers can focus on their own modules with less in-depth knowledge of the overall picture. Modern software projects almost always involve a big team of developers and testers; no one has enough capacity to know everything, and if someone really does, it is not a good sign for the company :-). A better level of modularity provides the key foundation for successful big-team collaboration.

Having thought of these two questions, the subsequent question will be:

How to design modularized software?

Or, in the worse case: how to modularize the current software?

I have two related approaches to discuss for these two questions:

1. Component based programming (Object-oriented style and Finite-State Machine based model) to design a modularized software system;

2. Iterative and incremental improvement of software modularity for existing system.

Component based programming (Object-oriented style and Finite-State-Machine Model)

Everyone wants his software project to execute smoothly. In the projects I have experienced or heard of, it is almost guaranteed that there are complaints like:

1. The developers complain that the requirements are not concrete enough to write a design document when the project is about to start;

2. The developers and the marketing people complain that the development result and the requirements do not match;

3. The engineering team complains that the codebase has become so complex that nobody has a clear full picture and a lot of behavior is unpredictable; therefore, the management team has to push the engineers to work harder and harder to meet the project deadline.

4. The development team complains that the testing team is not testing what is expected to be tested.

If you ask around for solutions, there will be many: improve management communication channels, provide more training, find more competent people, and so on. I agree with these points, but I believe the root cause has a technical solution as well. I think we need an effective way to map marketing requirements to design documents, or in other words, to translate between engineering language and business language. And I think component-based programming, plus an object-oriented style (note: there is no requirement to use an object-oriented programming language) and a finite-state-machine model, is the way to do it. I also think this is applicable to firmware on embedded systems as well as to regular applications on the PC platform.
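As a hedged sketch of what "object-oriented style plus a finite-state machine, without an object-oriented language" can look like in plain C (the states, events and actions are invented for illustration): the behavior lives in an explicit state/event table instead of scattered if/else logic, which is also much easier to trace back to a requirement or a state diagram.

    #include <stdio.h>

    typedef enum { ST_IDLE, ST_CONNECTED, ST_COUNT } state_t;
    typedef enum { EV_CONNECT, EV_DISCONNECT, EV_COUNT } event_t;

    /* One "object": its data plus its current state. */
    struct link {
        state_t state;
    };

    typedef void (*action_fn)(struct link *);

    static void do_connect(struct link *l)    { l->state = ST_CONNECTED; puts("link up");   }
    static void do_disconnect(struct link *l) { l->state = ST_IDLE;      puts("link down"); }
    static void do_ignore(struct link *l)     { (void)l; }

    /* The state machine is data, not control flow: one action per (state, event). */
    static const action_fn table[ST_COUNT][EV_COUNT] = {
        [ST_IDLE]      = { [EV_CONNECT] = do_connect, [EV_DISCONNECT] = do_ignore     },
        [ST_CONNECTED] = { [EV_CONNECT] = do_ignore,  [EV_DISCONNECT] = do_disconnect },
    };

    static void link_handle(struct link *l, event_t ev) { table[l->state][ev](l); }

    int main(void)
    {
        struct link l = { ST_IDLE };
        link_handle(&l, EV_CONNECT);
        link_handle(&l, EV_DISCONNECT);
        return 0;
    }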

This is a long topic that could easily be discussed for days or even months, so I had better stop here. There are many books, white papers, and company websites on this topic. One of my favorite companies in this domain is Telelogic (now part of IBM).

Define effective Policies/Procedures/Processes

Recently I came across the following, which I think is quite useful for defining short and usable processes, policies, procedures and so on for software development. I hate long textual software process documents, which eventually nobody follows exactly and which create a lot of meetings, telcos and emails for clarification and discussion.

http://www.stsc.hill.af.mil/crosstalk/2006/06/0606Olson.html

And some templates can be leveraged.

http://sepo.spawar.navy.mil/Process_Assets_By_PA.html#OPD

Software Production Line

I heard this jargon about 18 months ago from Charles Krueger (www.biglever.com). At the time, the organization I was working for suffered greatly from maintaining similar functionality and implementations (I would say 90% commonality) for a sheer number of products. We were using a mixed branching / conditional-flag (#define) mechanism to tackle this problem.

The mechanism had been there for years and initially worked fine. As the codebase complexity grew (linearly in terms of lines, but exponentially in terms of functionality integration), we faced the problem that a fix for product A did not go into the codeline of product B, which uses the same ASIC/SERVO/..., so the team for product B had to work on it again. An even worse case: the team for product B is not even aware the issue has been solved for product A, so they create another solution, which works fine for product B and creates one more problem when the codelines of products A and B are integrated. It becomes a nightmare to some extent!!!

However, the issue still exists now, after 18 months of effort. This reflects how difficult it is to resolve the problem or to influence the organization. I have no intention of revealing what the organization has done, due to confidentiality; these are just some notes for the record along the way.

Here is a summary of the software product line concept, based on my understanding from discussions with Charles Krueger and from reading on the web.

Software reuse/commonality is an increasingly important issue across the industry. Conventionally, there are 3 ways to address the software product line issue.

1. Simply use codeline branching, conditional flags, and file-parsing scripts to handle the problem.

This works fine for some organizations, especially if their products come from the same codebase but have very little future commonality, and especially if the branching convention or conditional-flag convention can be defined properly, the makefile (or, to use a bigger name, the build procedure) supports it, and the organization gets used to it. This approach can work even better than the more advanced approaches below because there is no deployment or domain-specific knowledge overhead.

2. Tools like the Koala compiler can take in a variant definition, generate the codebase, and compile it. This approach has pros and cons, depending on how the tool is implemented and deployed in the organization. The general rule is that there should be no run-time overhead.

3. Tools like Gears. As Charles Krueger says, it is a totally new approach that replaces all the above methods. I have a strong interest in testing it, but I have not tried it so far for various reasons, so no comment.

Based on my observation, many organizations could do some fundamental things to improve the SPL situation before jumping into all the technical concepts and tools.

1. At the bare minimum, put each component into its own directory and make it compile independently. Code modularity helps mitigate the software variation problem significantly; at least with the design and test work partitioned, the number of issues won't multiply.

Soft approaches like improving team communication and having a better development policy will mitigate the problem, but these soft approaches won't solve the componentization issue. The hard approach is to generate a compile-time (better) or run-time error when the rule is violated.

2. Define searchable conditional-flag conventions and/or a branch naming convention. For code with over 70% commonality, use source files interleaved with conditional flags (see the sketch after these two points). If you worry about readability, find an editor to help you;

1) Keep your branches to a minimum, and avoid local workspaces.

2) Have some home-grown flag tools if necessary.
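A small illustration of what a searchable conditional-flag convention might look like (the flag names are hypothetical): every variation flag shares a grep-able prefix, and a missing or unexpected product flag is caught at compile time, which is the "hard" enforcement mentioned above.

    /* variant.c: build with, for example,  gcc -DCFG_PRODUCT_A -c variant.c
     *
     * Convention (hypothetical): every product-variation flag starts with
     * CFG_PRODUCT_, so `grep -rn CFG_PRODUCT_` lists all variation points. */

    #if defined(CFG_PRODUCT_A)
    #  define CFG_CACHE_SIZE 512            /* product A shares the ASIC but has less RAM */
    #elif defined(CFG_PRODUCT_B)
    #  define CFG_CACHE_SIZE 2048
    #else
    #  error "No CFG_PRODUCT_x flag defined: set one in the product makefile"
    #endif

    char product_cache[CFG_CACHE_SIZE];     /* sized per product at compile time */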

3. The software componentization solution (which partially solves the software variation problem) is not limited to the compilation stage. It is inherited from design, and extends to unit test, system test, production, and so on. There could be many things to say on this, but at the bare minimum, have a way to do proper unit tests or subsystem tests.

4. It is important to keep modularization and a reasonable diversity-control mechanism (branching policy/convention, conditional-flag naming convention, clustered makefiles, or variant-control tools) in place prior to implementation. In reality, that may be possible for a small to medium scale project (<20 man-years). It is usually not the case for industry-scale large projects, which can easily reach 1000 man-years of effort (including deployment support). I have attempted to correct the direction of a codebase (1 million lines) halfway; it is difficult, in terms of technology as well as organizational culture and the impact on releases (customer calls always have the highest priority). I could easily take more than 20 pages to describe the feeling. However, the answer, in short, is to do it incrementally and feed the quality results back to the management team. Be prepared: it is a long journey.

Eliminate modeling/modularization overhead for embedded system

Software modeling (especially state diagrams, my favorite) and modularization certainly help break down a complex software architecture and enable multiple teams to work together. They are essential nowadays because most development organizations I know have their engineers scattered around the world; it is next to impossible to make them work together efficiently without modeling and modularization.

However, one concern from the embedded world is the overhead: more specifically, the static overhead (memory footprint inflation) and the dynamic overhead (performance penalty). My observation is that it is possible to keep this overhead to a minimum, or even make it zero, by using the compiler preprocessor (#defines). For example, a "#define FUNC       func();" decouples the logic but does not change the compiler output.
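A hedged sketch of that zero-overhead decoupling trick (the target and function names are made up): the modeled or generated code calls an abstract macro, the mapping to a concrete function is resolved entirely by the preprocessor, and the compiled output is the same as a direct call.

    #include <stdio.h>

    void mpc860_led_on(void)   { puts("board LED on");     }
    void host_sim_led_on(void) { puts("simulated LED on"); }

    /* The only place that knows which concrete driver is in use. Because the
     * mapping is a plain #define, the compiled output is identical to calling
     * the concrete function directly: no extra indirection, no footprint growth. */
    #if defined(TARGET_MPC860)
    #  define HW_LED_ON()  mpc860_led_on()
    #else
    #  define HW_LED_ON()  host_sim_led_on()
    #endif

    /* Modeled or generated code only ever sees the abstract name. */
    void blink_once(void) { HW_LED_ON(); }

    int main(void) { blink_once(); return 0; }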

Another observation: even if modeling/modularization causes some overhead, there is more overhead when 2 teams are working on the same issue but located on opposite sides of the planet. They will create duplicate resolutions, and sometimes, even worse, a third resolution to resolve the issues that come from combining the previous 2.

Thought on Microsoft Requirement Management

Some of my friends and neighbors, who are not IT professionals, ask me why their computers become slower and slower, so that they usually need to reformat the hard drive every 3 to 6 months. I usually run some cleaning tools, uninstall some extra applications, and defragment their hard drives. It happens very often indeed; sometimes even my relatives in China ask me.

I don't understand why Microsoft does not put this into a default background task for Windows, executed when the CPU has enough bandwidth, just like they reserve 20% of the network bandwidth to download Windows updates when the network is idle. It should be a super trivial project for Microsoft, maybe just some crontabs. Or has MS done that already and I don't know?

I was involved in the CMMI improvement work for Seagate, so I am wondering how effective Microsoft's requirement feedback process is: how their marketing team collects requirements, and how they ensure the engineering team delivers a product consistent with the marketing requirements.

From the projects I have experienced, have insight into, or have learned about from web pages, I think that to reach the ultimate goal of requirement integrity (I steal this word from hard-drive data integrity), the most effective way to gather requirements is to have a web-based discussion forum and itemized requirements; version control of the conclusions is also desired, along with someone responsible for answering questions. A requirement can be itemized and linked at the business level, system level, and software level. The web discussion forum is for collecting offline comments, and a meeting to draw conclusions is desired. Diagrams are also desirable to explain the ideas.

Motorola vs ZTE.

The rumor has been around for a while that Motorola is going to sell off its mobile phone business, and one of the possible buyers is ZTE from China. As a telecom veteran, I happen to have working experience at both companies. These two companies have completely different cultures.

In principle, Motorola is a company with many talented engineers who take pride in their technical achievements; you will be convinced just by looking at the Iridium project. I used to be an insider and had the chance to work on their CDMA project, and I was convinced after experiencing the maturity level of their software engineering, from both a process and a technical perspective. However, Motorola has other problems and finds it hard to grow its business. ZTE also has many talented engineers, but its top priority is making money (though officially every company's top priority is to make money), by every means you can or cannot think of. Of course, there is nothing wrong with that either.

Anyway, I don't think the acquisition will turn into reality now or at any point in the future. Even if it did, the new joint company would not generate any profit.

Recently the news that Motorola wants to sell its handset business has been spreading widely, and one of the rumored buyers is ZTE. As a veteran of the telecom industry, I happen to have worked at both companies and have a feel for both cultures. I cannot imagine how these two companies could ever come together: one is full of gentlemanly manners and proud of its engineering culture, while the other puts making money first (which, of course, is not wrong either). This merger is an impossible thing; even if it happened, there would be no way to make it profitable. Still, I admire my former colleagues, who turned ZTE into a big company in ten years. I wish them continued success!

Executive Summary

This space is mainly designed for sharing experience in the embedded, software, and communication domains, as well as for discussion. Although my native language is Mandarin Chinese, I prefer to write technical documents in English.

I feel more comfortable talking about non-tech stuff in Chinese. Due to the well-known internet policy issue, I split my Chinese blog between Sina and Wenxuecity.

You might see a lot of posts dated October 2009; that is because I migrated all of my past posts from MSN Live to Blogspot during my virtual China National Holiday week. Please feel free to comment on any topic.

If you are interested, my LinkedIn profile is here.

The best practice for programming

I was asked about the best practices of programming. As far as I know, there are plenty of such documents available online, so I just list some things I learned from my past projects. Most of them are independent of any programming language, and some are not even directly related to coding. But I like the saying: there is more to driving than knowing how to operate a car.

1. Communication! Communication! Communication! Talk to anyone you think necessary within the development team, from expert to novice. Present ideas in diagrams. Make sure all developers are working with each other, not against each other. I myself find it boring to read theory for more than half a day; learning and training must happen through iterations of hands-on work, discussion, thinking, and reading. This is not limited to programming. The rule of thumb is that all developers must know they are coding towards the right common goal, which is more important than any so-called best practice. Obviously, if your organization sets the wrong direction, then we cannot help at this level.

2. Always define meaningful naming conventions for everything: variables, functions, conditional flags. Try to make the code self-explanatory. I am not saying we can ignore comments, but it is hard (if not next to impossible) to ensure everyone always keeps their comments in sync with the code in a large project.

3. Always define a coding style (even if you are the only developer). Enforce the coding style with static checking tools if necessary. There are auto-formatting tools, but I think those are useless for big projects.

4. Pay attention to the default case. Assume there are 2 possibilities, A and B. Be careful when using

   if A   do something for A;

   else   do something for B.

One day, when you have a third possibility C, the above example becomes a bug; the same goes for switch, #if/#else, and so on. Try to make the conditions explicit, and create a warning or generate some notice for the error case, as in the sketch below;
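A minimal C sketch of this point (the enum and messages are invented): handle each known case explicitly and make the default branch fail loudly, so a later possibility C cannot silently fall into B's code path.

    #include <assert.h>
    #include <stdio.h>

    enum mode { MODE_A, MODE_B };           /* one day MODE_C will be added... */

    static void handle(enum mode m)
    {
        switch (m) {
        case MODE_A: puts("something for A"); break;
        case MODE_B: puts("something for B"); break;
        default:
            /* Fail loudly instead of silently doing B's work for a new case. */
            fprintf(stderr, "unhandled mode %d\n", (int)m);
            assert(0);
        }
    }

    int main(void) { handle(MODE_A); handle(MODE_B); return 0; }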

5. Use debug macros, and make sure they can be filtered by severity level or functional area and can be turned off (instead of removed) in the release code. One possible shape is sketched below.
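One possible shape for such a macro in C (the severity names and the DEBUG_LEVEL knob are illustrative, not from any particular codebase): messages above the configured level become dead code the compiler can drop in the release build, yet the instrumentation stays in the source.

    #include <stdio.h>

    #define DBG_ERR   1
    #define DBG_INFO  2
    #define DBG_TRACE 3

    #ifndef DEBUG_LEVEL
    #define DEBUG_LEVEL 0                   /* release build: nothing is printed */
    #endif

    /* Calls above the configured level are constant-false branches the compiler
     * can remove; the source keeps its instrumentation either way. */
    #define DBG(level, ...) \
        do { if ((level) <= DEBUG_LEVEL) fprintf(stderr, __VA_ARGS__); } while (0)

    int main(void)
    {
        DBG(DBG_ERR,   "error: sensor %d timed out\n", 3);
        DBG(DBG_TRACE, "trace: entering main loop\n");
        return 0;
    }

Building with, say, -DDEBUG_LEVEL=3 turns everything back on without touching the source.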

6. For a large project, make it clear which functions are public and which are private, and have some way to detect violations automatically (compiler, script checker, etc.), for example as sketched below.
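In C, part of this can be enforced by the language itself rather than by review alone; a sketch with hypothetical names: private helpers are declared static, so any other module that tries to call them fails at link time.

    /* uart.c (sketch): only uart_send() is part of the module's public interface. */
    #include <stddef.h>

    /* Private: invisible outside this translation unit. Another module that
     * tries to call it gets an "undefined reference" error at link time. */
    static int uart_wait_ready(void) { return 1; }

    /* Public: declared in uart.h (not shown) for other modules to use. */
    int uart_send(const char *buf, size_t len)
    {
        if (!uart_wait_ready())
            return -1;
        (void)buf; (void)len;               /* hardware access omitted in this sketch */
        return 0;
    }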

7. A common error: if ( A=B ) instead of if ( A==B ), which assigns the value of B to A and executes the block based on the new value of A, which is B. So be careful to use a proper coding style; you can check for this with "grep" or a script (see the example below).
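A concrete C illustration of this classic mistake (the variable names are arbitrary), together with two cheap defenses: turn on compiler warnings, or put the constant on the left so that a mistyped "=" refuses to compile.

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 5;

        if (a = b)                  /* BUG: assigns 5 to a, so the branch is taken;  */
            puts("always taken");   /* gcc/clang -Wall flag this suspicious pattern  */

        if (5 == a)                 /* constant on the left: typing '=' by mistake   */
            puts("comparison");     /* here would not compile at all                 */

        return 0;
    }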

8. I like model-based design, layered structures, and PowerPoint slides that explain the flow in diagrams; try to use these for documentation. BTW: if your model can be used to generate code, then it is a live model. Otherwise the development team will use it for a while for various reasons, but later the model will be dead.

9. For senior developers and tech leaders, make sure you have a clear idea of the logical partitioning and the software configuration management, or at least an idea to discuss with management and developers. Make sure that more developers means more contribution towards the final deadline, NOT more chaos.