HCL Interview

March 28, 2017 at 11:16 am | Posted in Uncategorized | Leave a comment

Earlier I wrote about my interview experience with NetCloud Systems, Bangalore. Recently I appeared for an interview with HCL Technologies. (HCL Technologies is on the Forbes Global 2000 list. It is among the top 20 largest publicly traded companies in India, with a market capitalisation of $22.1 billion as of May 2015. As of August 2015, the company, along with its subsidiaries, had a consolidated revenue of $6.0 billion. — Source: Wikipedia)

The interviewer was quite a bit younger than me. He asked me to write a program using a singly linked list that removes the Nth element counting from the last node added. E.g., if you added 10 elements to the list, then the 7th element from the end is the 4th element you added in the beginning, hence the 4th should be removed. If 1,2,3,4,5,6,7,8,9,10 are the elements you added, then “./a.out 7” should remove 4, not 7.

I told him I could use the singly linked list as a stack, where elements are always added in reverse order. From his looks I could make out that he did not understand what I had just said. So I asked him if I could use any method; he replied that I should use pointers, and gave me a paper and pen to write on. Below is the full-fledged code with all the checks, plus command-line input and string conversion etc., but the algorithm/logic/data is exactly what I wrote there. He said this code would not remove the 7th node from the end but the 5th from the end, and that it was wrong code. I was shocked to hear that. Look at it yourself and see the output:

/* HCL Interview Question (2017)
 * A singly-linked-list (SLL) program to add nodes to an SLL and remove the
 * Nth node from the end.
 * e.g. add 10 nodes to an SLL & remove the 7th node counting from the end;
 * the 7th node from the end is the 4th node you added while building the SLL.
 * I am using a Stack (LIFO). The last node added (the end) is always at the
 * head, so we can walk down from there easily. The interviewer said it will
 * not work. Worked fine for me 🙂
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <limits.h>

struct node {
  long nn;
  struct node* next;
};

struct StPtr {
  struct node* head;
  long tn; /* total number of nodes */
};

struct StPtr* st;

void print_diagnostics(void);
void printStack(void);
void convert_to_long(long*, long*, const char*, const char*);
void addNodes(long, long);
void removeNode(long, long);

int main(int argc, char* argv[]) {
  if(3 != argc) {
    print_diagnostics();
    return EXIT_FAILURE;
  }

  st = malloc(sizeof *st);
  if(NULL == st) {
    printf("Out of Memory\n");
    return EXIT_FAILURE;
  }

  st->head = NULL;
  st->tn = 0;
  long t = 0, r = 0;
  convert_to_long(&t, &r, argv[1], argv[2]);
  addNodes(t, r);
  printStack();
  removeNode(t, r);
  printStack();

  return EXIT_SUCCESS;
}

void removeNode(long num, long rem) {
  if((0 >= num) || (0 >= rem) || (rem > num)) {
    printf("Nothing to remove\n");
  }
  else {
    struct node* t = st->head;
    struct node* prev = st->head;
    long i = (t->nn - rem) + 1;
    /* If we are removing the head */
    if(i == st->head->nn) {
      st->head = st->head->next;
      free(t);
      printf("Removing node#: %ld \n", i);
    }
    else {
      while(i != t->nn) {
        prev = t;
        t = t->next;
      }
      prev->next = t->next;
      free(t);
      printf("Removing node#: %ld \n", i);
    }
    st->tn--;
  }
}

void addNodes(long num, long rem) {
  if((0 >= num) || (0 >= rem) || (rem > num)) {
    printf("Nothing to add\n");
  }
  else {
    for(long i = 1; num >= i; ++i) {
      struct node* p = malloc(1 * (sizeof *p));
      if(NULL == p) {
        printf("Out of memory, will not add node\n");
      }
      else if(NULL == st->head) {
        printf("Adding 1st node\n");
        p->nn = i;
        p->next = NULL;
        st->head = p;
        st->tn++;
      }
      else {
        p->nn = i;
        p->next = st->head;
        st->head = p;
        st->tn++;
      }
    }
  }
}

void convert_to_long(long* num, long* rem, const char* numptr, const char* remptr) {
  errno = 0;  /* To distinguish success/failure after the call */
  char* endptr;
  long temp = strtol(numptr, &endptr, 0);
  if( ((0 == temp) && (numptr == endptr))
      || ((LONG_MAX == temp) && (ERANGE == errno)) ) {
    *num = 0;
    *rem = 0;
  }
  else {
    *num = temp;
    errno = 0;
    temp = strtol(remptr, NULL, 0);
    if( ((0 == temp) || (LONG_MAX == temp)) && (ERANGE == errno) ) {
      *num = 0;
      *rem = 0;
    }
    else { *rem = temp; }
  }
}

void print_diagnostics(void) {
  printf("Invalid Number of Args\n");
  printf("Provide 2 arguments:\n");
  printf("\t1st arg is number of nodes\n\t2nd arg is number of node to be removed\n");
}

void printStack(void) {
  struct node* t;
  for(t = st->head; t; t = t->next) {
    printf("%ld, ", t->nn);
  }
  printf("\n\t---> Total %ld nodes\n", st->tn);
}
[arnuld@arch64 programs]$ gcc -std=c99 -pedantic -Wall -Wextra ll.c
[arnuld@arch64 programs]$ ./a.out 10 7
Adding 1st node
10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
—> Total 10 nodes


Removing node#: 4
10, 9, 8, 7, 6, 5, 3, 2, 1,
—> Total 9 nodes


[arnuld@arch64 programs]$ ./a.out 10 5
Adding 1st node
10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
—> Total 10 nodes


Removing node#: 6
10, 9, 8, 7, 5, 4, 3, 2, 1,
—> Total 9 nodes


[arnuld@arch64 programs]$ ./a.out 10 1
Adding 1st node
10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
—> Total 10 nodes


Removing node#: 10
9, 8, 7, 6, 5, 4, 3, 2, 1,
—> Total 9 nodes


[arnuld@arch64 programs]$ ./a.out 1 10
Nothing to add

—> Total 0 nodes


Nothing to remove

—> Total 0 nodes


[arnuld@arch64 programs]$

I don’t understand how he could say this code will not remove the Nth node from the end; the code behaves fine. Maybe he never wrote a stack in his life. I made another mistake by trying to explain to him that a stack can be implemented as a linked list, and he shrugged it off. He asked me to write it using some other method (he meant another algorithm) but to explain it to him first. I did, and he said the same words again: this new method would not work either. Then I told him there are several ways to approach the problem, which is why so many algorithms (methods 😉 ) exist, and asked if there was any specific method he was thinking of. He told me that he was done with me and I could leave for the day 🙂

After coming out I saw “ODC” written in capital letters on the doors. That means Offshore Development Center. Most ODCs in Indian companies are service-based; there is not much development/coding involved. Companies in India get freshers in mass campaigns from colleges and put them to work because they are dirt cheap. These freshers have never worked before; most engineering colleges here have non-ISO-conforming compilers, and the code in their CS textbooks does not conform to any ISO/ANSI standard either. They have learned C and C++ from such resources, so it is expected that they won’t know the definition of the C language, how ISO-conforming and non-conforming compilers behave, and what that costs in terms of bugs and maintenance.

Now let’s get back to HCL, the company. What did they lose? I am just one programmer, but what about the others? There were 100 people at the interview and I am sure at least 10 of them must be very good programmers. Will they get hired? I have been to interviews where I had to choose one answer out of 4 options given to me for “int i = 0; printf(“%d\n”, i++ + ++i);” and none of the options said “UB” (undefined behaviour). You were forced to choose an integer as the answer. What did I do? I wrote “option 5: UB” just below option 4, and of course I was not hired 🙂 . Can you imagine the quality of code in such a company, the quality of hires, and what a nightmare it will be to fix bugs and add new features? HCL did not do much different in my interview. Whom to blame? Their hiring process, which lets a person with little knowledge of data structures interview a person with a medium level of knowledge of data structures. Who created such a process? I know I am not that intelligent, because I have never written an AVL tree and I can’t understand more than half of the maths Knuth writes in his Art of Computer Programming books. I just try to improve every day; I like to read language definitions, reference manuals and man pages, and do what they advise. And I am still working on Knuth. In the book Programming Interviews Exposed, the authors wrote that the further you move away from coding and towards the business side, the more money you make and the more secure your career will be (I did not know at the time that Google’s machine-learning department is working on making coding obsolete). You make more money if you move from being a programmer to a business analyst and then to a software architect (a Glassdoor search will tell you this is really true).
Now, how does a person who loves to program, who loves to write code day and night, who loves to learn more languages, algorithms, more UNIX, Linux, DragonFly BSD, NetBSD and the UNIX way of solving problems, data structures, gcc and its libraries; who loves programming paradigms, loves Lisp and the idea behind its macros, loves to read why Eiffel was created, what problems it solves and how it solves them, and wants to inculcate all of this into his thinking, along with the shell he works in and mastering his text editor; all that just out of interest and liking. What can that person do to show his skill and get to work with an MNC? Not much, if the company is not even interested in having a skilled person and improving their software, tools and methodologies. I think that challenge comes down to the programmer himself. Few companies want to improve, most don’t, and you go and find those few companies.

Copyright © 2017 Arnuld Uttre, Hyderabad, Telangana – 500017 (INDIA)
Licensed Under Attribution-NoDerivs 3.0 United States (CC BY-ND 3.0 US)

Death of comp.object

January 16, 2015 at 10:40 am | Posted in Programming | Leave a comment

If you are passionate about programming then you must have discussed programming on some newsgroup. It is really sad to see that the current generation of programmers has never heard of USENET or of newsgroups. Even programmers with half a decade of experience in OOP have never heard of comp.object, and this is exactly the place where I learned the foundations of the object-oriented methodology. As of 2015, comp.object is dead and full of spam; there has been almost no useful post for a couple of years, useful in the sense of experienced OO practitioners discussing something fundamental about the OO methodology. There were people like H. S. Lahman, Uncle Bob, Daniel T., Jerry Coffin, Philip and S. Perryman (and of course never forget the anti-OO zealot “topmind”). There were several others, but I can only recall a few.

To tell you the truth, I never hated OOP as much as “topmind” hates it; I hated the very word “object”, but that all changed when I started hanging around comp.object. I made very few posts, all of them a newbie asking for answers to his questions and doubts, but I read a lot of posts, really a hell of a lot of threads, and just like comp.lang.c, comp.lang.c++ and comp.lang.lisp it was a very good experience following the guidance of the folks there. Here is some brilliant advice on how to start learning the OO way from one of the experts in the field (H. S. Lahman). I have not edited his advice; I do not want anyone to miss his original words 🙂. Here are expansions of some of the acronyms:

  • OT: Object Technology (referring to the entire field based on Object Oriented Thinking)
  • OOA: Object Oriented Analysis
  • OOD: Object Oriented Design
  • OOM: Object Oriented Modelling

> I am new to the field. I have some programming experience. I
> was wondering if anyone could recommend a good book to start.
> Thanks in advance.

Alas, OT is a big field and one book probably won’t do it for you.

I would suggest you start with a book on OOA and/or OOD. Such books
generally describe the fundamentals better. But avoid books with a
specific language or ‘UML’ in the title. Those tend to be about
manipulating syntax rather than fundamentals.

Even if you end up using a pure OOP-based process, you should still
trying some UML modeling. That’s because it provides a good expression
of the fundamentals in a very compact manner. In this case a book with
UML in the title is an advantage.

Then you need to deal with OOP. For that you need a book with a
specific language in the title. Probably two books because it is useful
to start “playing” with OT using a “purist” language like Smalltalk.
That will complete the OOA -> OOD -> OOP cycle in the most coherent
fashion. (Alas, the most popular OOPLs have made more compromises with
Turing so the transition is less obvious.) Then you will need a book on
the language de jour if you plan on doing OT professionally.

You’ll notice I didn’t recommend any specific books. In each category
there are lots and most are pretty similar. Since I haven’t read them
all, I can’t even guess which one is actually the best. Just browse
them in a book store and pick the one that seems most readable and
provides the most clarity to you.

There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
Pathfinder Solutions
blog: http://pathfinderpeople.blogs.com/hslahman
“Model-Based Translation: The Next Step in Agile Development”. Email
in…@pathfindermda.com for your copy.
Pathfinder is hiring:

Such a beautiful explanation, such straightforward and simple advice. If you search the archives of comp.object on the net, you will come across much more information from many intelligent and experienced people. I read a lot of threads, and based on that knowledge I started searching further and came across a few interesting facts, one of them being: OOP is not really about objects. Is that shocking? You may think I must be crazy to suggest that Object Oriented Programming is not about objects. There are only two important building blocks of OOP: one old, one modern. As per the old one: Alan Kay, the man who coined the term, said OOP is not about objects, and in fact he regretted using the word “object”; it is about message-passing. As per the modern outlook, OOP is about how objects behave, not how they are constructed or what features they have. Inheritance and classes are not basic building blocks of OOP.

Proof ?

Self is a class-less OO language 🙂. One does not need classes or inheritance to do OOP. It is so interesting. Now I understand that I really did not hate OOP; I hated the way it is taught in schools and colleges, the same way students hate Math: not because Math is boring but because it is taught neither in the right way, from the basics, nor through its practical applications to the world. Subjects which are so interesting in the practical world are never taught in schools that way. The second reason I hated OOP is that even in the professional world most software engineers never learn beyond what they learned back in college. A student just out of college looks upon experienced professionals as his gurus, the new teachers who will guide him to the correct path, but that does not happen much. Good programmers are rare and half of them are jobless. First, people are generally lazy; second, the ones who are not lazy do not get time to learn more; and third, not everyone likes programming, and if you belong to the lazy or no-interest group then you should not be doing programming. Programming is about passion; without passion and interest you are never going to be good at it. Without interest I could never have found what Alan Kay said about OOP, and I could never have come across the modern definition of an object. I checked comp.object today after several years, and what do I see: comp.object is a dead group. It was sad to see no new threads on OO methodology since the time I stopped reading. I wanted to ask what happened, and in the archives I saw someone had already asked the same question. The answer was true but sad: the world has changed, it is the digital age, this tech world moves very fast compared to other domains (sales, for example), and the OO landscape is no longer the same.
comp.object is full of spam now; it is dead and gone. Here are the reasons why it happened. Think of this blog post as a record of the history; it is difficult to find USENET newsgroup archives these days. You can try searching on Google Groups; the original thread title was “OOP, this NG and you. Where is everyone?” and I have posted a link at the bottom. This question was posted by Alvin Ryder:

HI Guys,

A few years ago this was a pretty active NG, it seems to be rather
quiet now and I seriously wonder why?

Is it because:
a. Uncle Bob rarely visits?
b. No one programs in English speaking countries anymore?
c. No one uses OOP much anymore?
d. Everyone moved to a funkier group? If so which one?

I don’t have any real clue what do you guys think (erh, if anyone sees


[by Alvin Ryder]

Usenet in general has been on the decline for the past several years,
probably because of the rise of Web based forums. I think that’s part of it.
In addition, some/many of the regulars seemed to have moved on. Also, and
this is nothing personal, I would contend that the activity of this group
over the past 8 years has been somewhat artificially inflated due to
topmind’s involvment. If you do searches on this group regarding a variety
of topics, you’ll run across many monster threads circa 5 or 6 years ago
involving the regs and topmind. As far as OOP in general, maybe it’s reached
the point in which every thing that can be said has been said.

[by Leslie Sanford]


> “Leslie Sanford” wrote:

> Usenet in general has been on the decline for the past several years,
> probably because of the rise of Web based forums. I think that’s part of it.

Sadly. Blogs seem to be taking usenets place.

> In addition, some/many of the regulars seemed to have moved on.
> Also, and this is nothing personal, I would contend that the
> activity of this group over the past 8 years has been somewhat
> artificially inflated due to topmind’s involvment. If you do
> searches on this group regarding a variety of topics, you’ll
> run across many monster threads circa 5 or 6 years ago
> involving the regs and topmind. As far as OOP in general, maybe
> it’s reached the point in which every thing that can be said has
> been said.

Agreed on the later point. OO seems to have reached some sort of
saturation point. The time is getting ripe for the “next big thing”, but
it seems that thing still hasn’t shown its face.

[by Daniel T.]

> Responding to Ryder…
> A few years ago this was a pretty active NG, it seems to be rather
> quiet now and I seriously wonder why?

I agree with Sanford. I would add that a surprising number of
developers today don’t even know that USENET exists.

However, I would also add the militant proselytizing of the OOP-based
agile crowd. That definitely killed the old OTUG forum Rational ran —
which was busier than comp.object once upon a time — and I think it
contributed here as well.

[by H.S.Lahman]

> “H. S. Lahman” wrote in message
> I agree with Sanford. I would add that a surprising number
> of developers today don’t even know that USENET exists.

> However, I would also add the militant proselytizing of the OOP-based
> agile crowd. That definitely killed the old OTUG forum Rational ran —
> which was busier than comp.object once upon a time — and I think it
> contributed here as well.

Contributed how ??
I doubt said “crowd” drove anyone away en-masse from the comp.* groups.

OTOH, a lot of them certainly seemed to exit stage left when their claims
were challenged sufficiently often (like giving it but not taking it etc) .

Steven Perryman


> Responding to Perryman…
>> However, I would also add the militant proselytizing of the OOP-based
>> agile crowd. That definitely killed the old OTUG forum Rational ran —
>> which was busier than comp.object once upon a time — and I think it
>> contributed here as well.
> Contributed how ??
> I doubt said “crowd” drove anyone away en-masse from the comp.* groups.

Bandwidth. Not too long ago this group generated ~100 messages a day,
which takes awhile to sort through. When a lot of those messages are
about advocating a particular development process and have little to do
with the thread subject matter, people decide they just don’t have time
to sort through it all. [On OTUG people were quite specific about why
they were quitting and there was no equivalent of Topmind pulling
people’s chains. The agile crowd learned from that and aren’t as
obnoxious here, but the basic bandwidth problem remains.]

When the fraction of that 100 messages/day that are feeding the P/R
troll or are about OOP-based agile advocacy approach 50% or so, the
useful information content of the forum becomes greatly diminished and
it ceases to be worth the trouble to sort it out. (Putting people in
kill files doesn’t work well because occasionally they have something
useful to say and it also trashes the context of the messages responding
to them.)

[by H.S.Lahman]

I’ve learnt more from this NG about software development than any
other single source, but after a while people just sit in the same old
entrenched position (myself probably included), noone ever admits to
having learnt anything new, or being wrong, so it fails to become a
positive experience, it’s just another endless avalanche of ranting
and nay saying….(myself probably included).

[by Mark Nicholls ]


> Responding to Parker…

> I don’t know. “Object Oriented” as a tag line has been vanishing for
> some time. It wouldn’t help you to publish a book anymore to have OO
> in the title. I don’t see any conferences anymore with OO in the
> name. Vendors have long since stopped talking about OO. All the so-
> called OO “methodologies, the Shlaer and Mellor, the Booch etc. appear
> to be gone, and no one seems to miss them. The OO databases are
> largely gone, no one talks about OO operating systems anymore.
> “Executable UML” is mostly gone. OO was part of one giant hype cycle
> for a while, but it’s over, and now the hype has moved onto other
> things, today it’s SOA. And just like not all of the SOA hype is
> nonsense, only 90 percent, not all of the OO hype was nonsence
> either. We still have the programming languages with support for
> ADTs. But that’s about it.

I agree there is a lot less marketing hype about OO, but why is that?
How many shops outside of low-level R-T/E and RAD pipeline development
use an OOPL vs. a procedural language or FPL? The reason there isn’t any
hype about OO is because it has been broadly accepted so talking about
it has no direct marketing value. In the early ’80s how many people were
selling tools because they were procedural? They were all procedural so
there was no point in differentiating on that basis.

In addition, if one looks at the technologies de jour that are being
hyped today, like SOA, they are mostly enabled by OO techniques. Even
the most hard-core RAD DBMS tools are climbing all over themselves to
look more OO-like.

I agree OOA/D methodologies are currently on the wane temporarily
because the OOP-based agile crowd is trying to convince everyone that
all you need to know about is OOP. But that bubble is beginning to burst
and I expect OOA/D methodologies to rebound, especially because…

As far as executable UML is concerned, it has “gone” to the major
commercial software houses. There are only two translation vendors from
the ’90s that are still independent as the big houses position
themselves strategically. The 50+% productivity and reliability gains
make translation as inevitable as conversion from BAL to 3GLs was. So
translation isn’t going anywhere; everyone else will be coming to it.

> comp.object became a “soft”
> newsgroup where almost anybody could post how they felt about “getters
> versus setters” or “method versus message”, or “behaviour versus
> data”, or “tell versus ask”. We were told there was theory, but it
> was somewhere else, in a book by Abadi and Cardelli or in some paper,
> but it never seemed to get incorporated into any discussions.

But aren’t those issues fundamental to OOA/D? Don’t the justifications
of those positions represent OO methodological theory?

Unfortunately one problem with comp.object is its schizophrenia. It
combines OOA, OOD, and OOP, which are quite different things. Thus the
type theory of A&D is largely irrelevant to OOA/D discussions while OO
design issues like separation of message and method are irrelevant to
OOP. IMO far too much forum bandwidth was spent on OOP issues. There are
plenty of language and programming forums on USENET where code
refactoring discussions could live. But comp.object is one of the few
software design forums.

[by H.S.Lahman]

The original thread is available through the Google Groups web interface.

Copyright © 2015 Arnuld Uttre, Hyderabad, Telangana – 500017 (INDIA)
Licensed Under Creative Commons Attribution-NoDerivs 3.0 license (a.k.a. CC BY-ND)

why companies fail to hire good talent

December 27, 2014 at 6:56 pm | Posted in Programming | 1 Comment

Recently Net Cloud Systems, Bangalore (INDIA) approached me for an interview. They needed someone specialized in C and UNIX, and since my last 5 years of industry experience are full of C, Linux and UNIX, they must have thought I could be a good fit for the company. Gosh! How wrong they were.

I had to appear for an online test to get recruited. Unfortunately, I did not get any call from them after the test, so I got the point that I did not clear it. That is fine by me; sometimes you win, sometimes you lose. As a computer programmer, a 24×7 coder, I have learned 2 things: 1st, it is always good to accept your own failures and move on with life by improving your skills. The 2nd thing is the essence of this blog post.

Net Cloud Systems had a few questions in the C programming test which did not actually belong to the C language. IIRC, there were 2 or more questions not related to C, but they were put in the C test. I wrote an email to them, explained what was wrong and how they could correct the mistake, and very politely said it would only be fair if they gave marks for those 2 questions. I got a very furious and arrogant reply in return. Below is the full transcript of the conversation:

arnuld uttre 	Tue, Dec 23, 2014 at 7:18 PM
To: hr-necs@netcloudsystems.com
Dear Sirs,

Recently I gave online C test as a part of the selection process by
Net Cloud Systems. I did not get any call after that which means I did
not clear the test and company is not interested in hiring me but that
is not the subject of this email. This email is about incorrect
questions in the C language test. I wrote one mail earlier about the
same issue (to hr-exec@netcloudsystems.com) but no one replied. Hence
you are receiving this email. Here is the technical issue:

I was given 20 questions in C language and only 18 belonged to C
language, other 2 were not. C language is defined by ISO committee
and this committee publishes the definition of the C language. You
can find the official draft of the standard online here, provided by
ISO committee at their site:


According to the definition of C language, C language does not have
any function named gcvt(). gcvt() was asked in one of the two
questions. Perhaps gcvt() is some compiler extension known to the
person who created the test but that does not come under C language.
And there are more than a bunch of excellent high-quality compilers,
you can write same C language conforming code in all of them but
different programmers use different compilers and that has nothing to
do with C language itself but the C language test provided by you
seems to confuse between the compiler and the language. Like I said,
I skipped over these 2 questions. I think the examiner should have given
me marks for these 2 questions, else it would be unfair. I am
attaching the PDF of the latest standard for your technical team to
look at themselves.

Now it is not just about me, it is about all the
programmers/developers who appear in interviews of Net Cloud Systems,
it will be unfair to all of them in the same way, not to mention the lack of
knowledge on your part. I hope you will look into it. Thanks for
reading my email.

 Arnuld Uttre

HR-NECS Wed, Dec 24, 2014 at 10:39 AM
To: arnuld uttre
Hi Arnuld,

First of all I would like to say that you could not clear the test.

Their is no mistake in the questions, one should have good and depth knowledge on C and Linux platform only then they can answer the questions.
Please correct your facts first and raise a question. The question that you got in the online test were not repaired by some freshers or 1-2 years of exp person.
So for you knowledge please go through some links below and a attachment.
These type of question in our company are answered by freshers or 1-2 yrs of exp employee.

Go to root terminal and type: “man gcvt”

Their are many things that keep coming in C language. It is very vast subject. People who have 10-12 years of exp only on C and Linux platform rate themselves 3.5/5 on C programming. How much would you rate your self ?

Hope you got your answers.
Thank you for reading my mail and thank you for your mail.


> Hi Arnuld,
> First of all I would like to say that you could not clear the test.
> Their is no mistake in the questions, one should have good and depth
> knowledge on C and Linux platform only then they can answer the questions.
> Please correct your facts first and raise a question.

Oh my dear Vikas….

First of all, I meant no disrespect; I am just trying to tell
you something which is “not correct” about your test but it seems like
you are ready to burn me alive. Please do not let your ego come in
between you and the learning. You can either read my email and do the
search yourself or just simply can get angry and call me a dog:

I got the facts correctly, down here is the proof :


You sent me this link. Did you even read that page yourself ? It
says gcvt() is not part of ANSI C:

Not defined in ANSI-C, but included in some compilers.

You see the link you sent me itself says, it is not part of C language
but “some compilers” have it and that is what I wrote in my last
email. Hope you trust Microsoft Corporation when it says, gcvt() is
not in C language:


Here is the code from the same page and output from an ANSI/ISO
conforming C compiler:

[arnuld@arch64 c $] cat gcvt.c
/* gcvt example */

#include <stdio.h>

int main (void)
{
  char buffer [20];
  gcvt (1365.249,6,buffer);
  puts (buffer);
  gcvt (1365.249,3,buffer);
  puts (buffer);
  return 0;
}

[arnuld@arch64 c $] gcc -ansi -pedantic -Wall -Wextra gcvt.c -lm
gcvt.c: In function ‘main’:
gcvt.c:8:3: warning: implicit declaration of function ‘gcvt’
gcvt (1365.249,6,buffer);

[arnuld@arch64 c $] gcc -ansi -pedantic -Wall -Wextra gcvt.c
gcvt.c: In function ‘main’:
gcvt.c:8:3: warning: implicit declaration of function ‘gcvt’
gcvt (1365.249,6,buffer);
[arnuld@arch64 c $]

> Go to root terminal and type: “man gcvt”

I did, dear, before you even sent a reply, and it says “LEGACY function,
removed. Please use sprintf() instead”. It IS not a POSIX or C
function now; it WAS a POSIX function, and it was removed back in
2008, deprecated just like the old K&R style of C, where we never
used to include any information about function arguments.

> Their are many things that keep coming in C language. It is very vast
> subject. People who have 10-12 years of exp only on C and Linux platform
> rate themselves 3.5/5 on C programming. How much would you rate your self ?

I leave that rating factor up to you now, since you can figure out
yourself whether gcvt() is a part of the C language or not. You told me
explicitly that the test was not created by some freshers. I can agree
with that, because in 5 years I have met only 2 programmers who really
knew the C language; they were not very experienced but were very good
at C and at programming in general, better than me. The majority of
software engineers in India, with many years of experience, do not
know much about the basics of C because they learned from college, and
college books are just the worst part of the story of learning C. Most
never learned C after college because C is not of much help in
employability. It ain't their fault; it is the Indian education system
and the industry requirements.

You took it personally rather than keeping an open mind to understand
the difference between a language, a compiler and the environment in
which both language and compiler exist.

> Hope you got your answers.

I hope you got yours. I already had this answered from students of
Late Dennis M Ritchie. A draft of the ISO Standard is attached with
this email, just like with my earlier email; please do read it. Thanks
for your time.
P.S. Software is not just about coding, it is about understanding
people first; almost half of the good habits/practices of software
development/engineering are built on understanding people. Listen to
the Google I/O 2009 talk on The Myth of the Genius Programmer. May God
bless you.

HR-NECS Wed, Dec 24, 2014 at 12:59 PM
To: arnuld uttre

Lets not take this further.

Thank you for the mail and the valuable information.

Thanks and Regards,

Well, I did exaggerate a bit that I got the answer from the Late Great Dennis Ritchie's students 😉 . Personally, I don't know any of Ritchie's students. I sure as hell learned a good amount of programming from great programmers, including some who have worked with Dennis Ritchie. I would not have become good at C without their mentoring. One day after this happened, I watched "The Myth of the Genius Programmer", a talk given at Google I/O 2009 by 2 Google developers: Brian Fitzpatrick and Ben Collins-Sussman. They made one very important point about the great programmers of the world: how important it is to be humble, flexible and devoid of ego to become a great (or genius, as they call it) programmer, and how important it is to respect your peers and their advice and suggestions when they walk through your code. It is called peer-review, and it is one of the pillars of GNU, Linux, BSD and all Open-Source software communities; they mentioned explicitly in their talk that peer-review happens at Google all the time. Peer-review is one of the greatest strengths behind the better quality of Open-Source software compared to proprietary software. I never had a big ego; on the contrary, I have always seen myself as a kind of small and short being, and I have listened to knowledgeable programmers half my age. In WIPRO I was on the ODC of MasterCard and I learned more qualities there: I learned to be humble and soft, and became more flexible. Not only my teammates but my team manager and project manager were great people too, and I think I worked with some of the best people of my professional experience.

Completely opposite to WIPRO, do you see the ego coming out of the email from the Net Cloud Systems HRD, dancing in front of your face? Rather than looking for the facts, this HR person totally closed his mind to new information, information which could have corrected not only their test questions but could have saved them from future embarrassment by some talented programmer. Arrogance instead of improvement. With this kind of mindset, no company can hire good talent. If a company can not understand that a programmer who knows about const-correctness, and why int main(void) is better than void main(), is better than an ordinary programmer, then you should never work for such a company. I thank God I did not clear the test and they did not give me those marks I asked for. If they have this much attitude before hiring, I wonder what would happen after one joins the company and offers some different but creative programming idea to solve a serious software problem. A good and talented programmer can not stop the flow of creativity; he would suffocate and die a slow death at a workplace where his ideas are suppressed. In 5 years, I had heard of companies like this, companies who pay much less money to programmers (mostly freshers) and kill their creative minds with bureaucracy, but I never had personal experience with them; now I do. I watched the Google I/O talk just the next day. I thought it was just a coincidence, but now I think it was God's guiding hand telling me to apply to better companies, to look for places where problems are solved with different ideas rather than egos, and where creativity flourishes. You should watch the talk; Brian and Ben gave a great one, and it is available on YouTube. Programming is about passion and interest. Don't work for those who can not grasp this, if you want to be a happy coder.

The Myth of the Genius Programmer

Copyright © 2014 Arnuld Uttre, Hyderabad, Telangana – 500017 (INDIA)
Licensed Under Creative Commons Attribution-NoDerivs 3.0 license (a.k.a. CC BY-ND)

Emacs way – copying text

December 9, 2014 at 12:47 pm | Posted in Patterns, Programming | Leave a comment

In Emacs, if you want to copy a region of text from one file to another, you press {C-space} to mark the beginning of the region, then move your cursor to the point up to which you want to copy. Then you press {M-w} (M means the Meta/Alt key), go to the file you want to paste into, put your cursor at the right place, press {C-y}, and it's done. It may look complicated to people who have used Notepad/Wordpad/MS-Office for many years and can just use the mouse to copy-paste. Well, it is the same, except that using the keyboard gets much easier over time; plus it kind of wires into your nervous system. Using the mouse to do something never gets easier over time, it stays the same.

Now, behind the scenes, Emacs uses a function called (append-to-buffer), and if you look at the pseudo-code or algorithm, this is how it looks:

(let (bind-oldbuf-to-value-of-current-buffer)
   (save-excursion                           ; Keep track of buffer.
     change-buffer
     insert-substring-from-oldbuf-into-buffer)
   change-back-to-original-buffer-when-finished)

Compare this with how it works in C:

  1. open the file you want to copy from, for reading
  2. if there was no error in step 1, open the file where you want to paste, for writing
  3. if there was no error in step 2, check that the mark has a lower value than point
  4. use fseek to go to the mark
  5. if there was no error in step 4, read/copy one character and write/paste it
  6. check whether copy/paste stopped because the work was done or because an error occurred while copying
  7. check whether copy/paste stopped because the work was done or because an error occurred while pasting

Here is the code in both languages:

(defun append-to-buffer (buffer start end)
  "Append to specified buffer the text of the region.
It is inserted into that buffer before its point.

When calling from a program, give three arguments:
BUFFER (or buffer name), START and END.
START and END specify the portion of the current buffer to be copied."
  (interactive
   (list (read-buffer "Append to buffer: " (other-buffer
					    (current-buffer) t))
	 (region-beginning) (region-end)))
  (let ((oldbuf (current-buffer)))
    (save-excursion
      (let* ((append-to (get-buffer-create buffer))
	     (windows (get-buffer-window-list append-to t t))
	     point)
	(set-buffer append-to)
	(setq point (point))
	(insert-buffer-substring oldbuf start end)
	(dolist (window windows)
	  (when (= (window-point window) point)
	    (set-window-point window (point))))))))
int copy_buffer_richard(const char *w, const char *r, int pf, int pt)
{
  int rc = 0;
  FILE *fpi = fopen(r, "rb");
  if(fpi != NULL)
  {
    FILE *fpo = fopen(w, "wb");
    if(fpo != NULL)
    {
      int len = pt - pf;
      if(pt > 0 && pf >= 0 && len > 0)
      {
	/* Everything so far has been housekeeping.
	   The core of the code starts here... */

	if(0 == fseek(fpi, len, SEEK_SET))
	{
	  int ch;

	  while((ch = getc(fpi)) != EOF)
	    putc(ch, fpo);

	  /* ...and ends here. From now on, it's
	     just a load more housekeeping. */

	  if(ferror(fpi))
	    rc = -5; /* input error */
	  else if(ferror(fpo))
	    rc = -6; /* output error */
	}
	else {
	  rc = -4; /* probably the in file is too short */
	}
      }
      else {
	rc = -3; /* invalid parameters */
      }
      fclose(fpo);
    }
    else {
      rc = -2; /* can't open output file */
    }
    fclose(fpi);
  }
  else {
    rc = -1; /* can't open input file */
  }
  return rc;
}
/* by Richard Heathfield */

Comparing both, to me the Emacs Lisp code is much easier to understand than the C code. The C code may look prettier, but that is because of the extra whitespace around it, whereas the Emacs Lisp code is tightly packed. Look at the pseudo-code of the Emacs Lisp version and how easily it connects to the real code: it reads almost like English, while the C version is, as usual, strikingly odd; its pseudo-code and real code look very different, which is typical of C. You may say the comparison is unfair because C is much faster than Emacs Lisp, one file in Emacs Lisp was already open, and I am comparing a full-fledged Lisp environment with a single C program. Yes, I get that, but then again the Emacs Lisp code is real code taken directly from the source code of Emacs, while the C code is just a stand-alone, small and short program. A real C program taken from real-life working software would be a lot messier. In one glance at the pseudo-code and the real code, you can guess what the Emacs Lisp code is doing, and it is easy on the head, whereas real-life C code requires many glances and is definitely far from easy on the head.

The Emacs Lisp version is much more readable, and this is a very important point. Ever heard the sayings "a developer's time is more important than machine time", "a computer program is written once and read 10,000 times", or "Programs must be written for people to read, and only incidentally for machines to execute" (Abelson and Sussman, Preface to the First Edition, SICP)? The last quote is from one of the most respected books in computer science. If you think those ideas are merely academic or theoretical, then you are completely missing the point. Good ideas are not only hard to grasp at first; it is also difficult to notice their practical benefit, especially if you do not have a few years of experience in programming. No matter how much the industry cries about changing customer requirements, good ideas are timeless. These changing customer requirements are nothing but the problems that computer programmers solve every day. If you work mostly in C and C++ at your workplace, you must have noticed that almost every company has moved to C++, while two decades back they used to develop mostly in C. More than 65% of the code in this entire world is still in C, but most of it is legacy code. A shift in thinking has happened. The programming world keeps churning out new languages, and almost everyone is moving towards languages like C++, Java, Python, Ruby etc. Why is that? If you look at the new languages, you will notice they were designed more around how to solve problems in a better way, how the new language can be a better and improved tool for solving the problems in or of this world; and indirectly (maybe unknowingly) these language creators have little interest in solving the problems of the machine itself (space and time complexity), because the problems of the machine and the problems of this world are two points that lie on opposite ends.
You can not brilliantly solve the one without ignoring the other by a good amount. C++ was created to solve the problems of large-scale software design, and hence the OO and generic programming paradigms were added. Rather than how to make it more efficient than C, the notion of how to make it better at solving larger problems was chosen. Ruby, Perl, Python and a lot of others were created primarily to solve problems unrelated to the machine's own problems. The world is moving from the machine towards abstraction. I call it moving towards solving the problems of this world, moving towards generalization and abstraction; Paul Graham calls it moving from the C model to the Lisp model, and he is right. Humans always evolve: no matter how many wars and world wars have been fought in which humans swore to kill each other, no matter how much negativity and selfishness there is in this world, humans have always evolved, and this shift from solving the problems of the machine to solving the problems of this world is a step further in human evolution. Richard Stallman had already evolved to this level by 1984 (along with many other great programmers; good thinking is timeless). He focused on solving the problem and created this amazing piece of software called Emacs. Thanks to him again.

You should try this book by Robert J. Chassell; it is kind of addictive. When I get some free time, it makes me wonder whether I should entertain myself with a movie or just enjoy reading his book 🙂

Copyright © 2014 Arnuld Uttre, Hyderabad, Telangana – 500017 (INDIA)
Licensed Under Creative Commons Attribution-NoDerivs 3.0 license (a.k.a. CC BY-ND)

How much math you need for programming

December 5, 2014 at 10:47 am | Posted in art, Hacking, Patterns, Programming | Leave a comment

Whenever I wanted to learn Algorithms, the Mathematics used there somehow seemed to be an obstacle. I admit my Math is not that good, but it ain't that bad either; this "ain't bad" level of knowledge, though, was not enough to learn Algorithms, the time and space complexities involved, and the comparisons of sorting and searching techniques which are at the heart of measuring the performance of computer programs. I needed to learn all these, and in that search I came across several articles on the Mathematics required for programming. I will explain what I learned from these articles. When it comes to programming, the most loudly known math proponent is Steve Yegge. Here is what I have found on the Math required for programming:

  1. Steve Summit notes on Math (author of brilliantly written C-FAQs)
  2. Steve Yegge who has written two articles Math Everyday and Math for Programmers
  3. Eric S. Raymond talks about how much math you need to become a Hacker
  4. Paul Graham on Math
  5. Evan Miller’s article as reply to 3 authors above
  6. Steven Noble wrote an article as reply to Evan Miller’s example of calculating fibonacci numbers

If you do not read all of the above, you will miss the intent of my blog post. As per Steve Summit, Eric Raymond and Paul Graham, you do not need to focus much on Math to become a brilliant programmer, a hacker, the most decorated word for a programmer (I do not mean crackers who break into computers and steal private data; read the Wikipedia definition and Eric Raymond's article on the definition of a hacker). Steven Noble says you should learn a little bit of Math, and Evan Miller somehow seems to agree with all of them, but in a bitter way. I myself started programming just for the love of it. Since 2009, professionally, I have been programming mostly in C, sometimes in C++, almost always on Linux and sometimes on UNIX. My passion for programming has made me read and write code in many different languages, where I had to learn different ways of thinking. Writing code is easy; thinking along the lines of the paradigm on top of which a particular language was modeled is a tough, daunting and very time-consuming task. I have always tried to do my best and got a good amount of experience doing that. I think I am qualified enough to write some comments about the articles mentioned above. So, let me tell you one thing very clearly: computer programming is not Math. Let me say it again: computer programming is not Math and never will be. If you want to learn computer programming, then learn computer programming. Do not flip through Math books; read whatever is written on a particular newsgroup (comp.lang.c, comp.lang.lisp for example), read about all the software that came from GNU, and use a Linux distro exclusively for everyday tasks (I prefer a distro with the least amount of binary blobs). If you are learning a lot of Math because you want to learn computer programming, then you are confused and headed in the wrong direction, and you will not learn much programming. Except in specialized fields like 3D game programming etc., you only need as much Math as Steve Summit mentions.

As computer programmers, we write programs, but why? We write programs to solve the problems of this world. That is what computer programmers do: they solve problems.

Now what does a mathematician do? He tries to understand nature and uses mathematics as a language to do that. Mathematics has helped solve many problems of this world. Look at how Quantum Physics, a branch of physics that has literally changed our millennia-old assumptions about atoms, depends heavily on Math. Math is everywhere: from the chemical industry to societal problems, we use Statistics. Take any part of your daily life and you will see how deeply it is influenced by Math. Math has been the most prominent vehicle not only for understanding nature but also for solving the problems of this world. There is a reason for this: all these properties are simply inherent in Math. I was not good at Math, so I was trying to solve the problems I faced every day as a programmer using my intuition, common sense, flow charts and other kinds of diagrams. This went on for a few years, and I came up with some rules and ideas on which I was building a model to solve problems, the problems I faced every day as a computer programmer. Building this model had one aim: to be extremely clear and very brief about what the problem is, and the same for the solution. I was creating a model to which you would feed a problem as input and it would produce a solution as output, using the English language, flow charts and a lot of other kinds of diagrams I created. This model had certain assumptions, rules and conditions, which again were very clear. Clarity and simplicity were high on the agenda. It was a kind of general, abstract mechanism to be applied to problems to get solutions. Then, a few months back, after I read all these Math articles, I came across one more article from Evan Miller titled Don't Kill Math, which was actually written in response to Kill Math by Bret Victor.

These two articles hit me very hard. First, Bret was trying to do the same thing I had been trying for a few years, though he was more successful than me in producing something. I could never come up with a solid model which could be used by everyone, and here is Bret, who has already done that. Was I happy? Yes, because I found what I was looking for, and I was ready to follow in Bret's footsteps. But I never did. Why?

There was a reason I could never come up with a solid model. I always thought it lacked something. No matter what I did and how much I worked on it, I always felt that something very fundamental and basic was lacking. My model lacked a soul, and a life can not exist without a soul. Whenever I read the Theory of Relativity, whenever I studied the Schrodinger equation, Maxwell's equations, Newton's laws, Kepler's laws, the Uncertainty Principle or the Shulba-Sutras, I always felt that all those equations are complete, that they have a soul, but my model does not. Both of these articles, Kill Math and Don't Kill Math, made me realize what that soul is. It is the properties of Mathematics mentioned in Don't Kill Math. The questions Evan asked in that article, and the way he explained them in very simple and basic detail, concluded my search for a model. Math is a terse, short, succinct and curt method of solving problems and understanding a phenomenon. These brutal characteristics are inherent to Math, just like a soul is inherent to every being. With Math you can solve problems in a much shorter and better way than without it. Try it yourself: read both Kill Math and Don't Kill Math and try to solve some problems using both methods.

This brings me to a very basic question: why did I hate math? If I truly did not like math, then I should not like it now either, but instead it is the opposite now: I like math. It was the way math was taught to me in school and college. I was taught rote math, not real math. The same is true for hundreds of thousands of children who pass out of Indian schools. It is not their fault that they can not comprehend, and hence hate, Math. It is a very common statement from Indian parents that "my kid does not know math, my kid hates math". It is the fault of the school, the fault of our education system, not of the student.

Coming back to the primary question of whether we need Math to become a great programmer, this is how the world solved its problems in the beginning:


Then came Math and this is what most mathematicians did:


I have worked in the software industry for more than 5 years now, and this is what almost all computer programmers/software engineers/developers do:


Evan Miller says you can become a first-rate hacker without using a lot of Math, and I think he is right; that is in agreement with all the other authors. The point he stressed was the role of Math in solving the problems of this world: Math is brutally efficient at solving real-world problems. As programmers, we solve problems, but if we solve problems using Math first and then apply programming solutions to the mathematical model of the solution, then we can have some amazing ways of providing better solutions that will make our lives easier as programmers (a kind of side effect):


I conclude this blogpost with these points:

  • You do not need math to become a first-rate programmer, because we do not use much Math directly. If you want to become a programmer, then learn programming. Computer programming is very different from mathematics, and as a computer programmer you have to focus more on how to write better programs, how to think in a particular paradigm (e.g. functional, OO, generic, procedural, logic, declarative etc.), and how to find better ways to create software; you need to understand design patterns, not to mention that learning and using C for a few years will add a new dimension to your thinking. None of these is related to math in any way. These are the tools we use to solve the problems of this world; e.g. look at the different paradigms on which different languages are built: you need to learn these first, and it will take you a few years before you get a grip on them, and then you can learn Math if you want. Read Introduction to Programming using Emacs Lisp by Robert J. Chassell to see how the problem of creating a customizable, self-documenting, ever-extensible real-time display text editor was solved. Read the GNU Make Manual and find out why it needs M4 and Autoconf.
  • Math is the most widely used vehicle to understand nature and solve the problems of this world. We can learn more ways of solving problems by learning mathematical methods. I myself have started studying probability because, like Steve Yegge said, once you understand Math you can look at a problem and see whether it is a probability problem, a calculus problem or a statistical problem etc. Math is related to the nature of the problem, not the nature of the software; software has its own methods and tools for solving problems, keep that in mind.

I want beginning programmers to go down the right path. Learning Math when what you actually want is to write computer programs is the wrong, wrong path to walk. Install a Linux distro; I prefer Trisquel for the latest software, and gNewSense if you want a solid and stable distro with a slightly outdated collection of software. Install Emacs using the package manager on the command line and start reading Introduction to Programming using Emacs Lisp, and you will get a true taste of computer programming. This image shows you the world of computer programming:


Copyright © 2014 Arnuld Uttre, Hyderabad, Telangana – 500017 (INDIA)
Licensed Under Creative Commons Attribution-NoDerivs 3.0 license (a.k.a. CC BY-ND)
