Saturday, April 21, 2018

Capturing and killing EiTest!!!

The goal of this post is to document what I did to solve my issue, in case I need to do it again; as a plus, it might help others struggling with the same problem.

Recently one of my email servers started to get rejections from other servers. As usual, the place to go is mxtoolbox.com to check whether the server is blacklisted, and indeed it was, listed by CBLAbuse. This organization checks, among other things, for botnet activity, including traps that catch servers making massive numbers of connections to different targets, which usually means a botnet is installed; in my case the listing was under the EiTest category (this malware has been around for a couple of years and keeps spreading; there was news this week that an organization found a way to stop it).

Ok, based on the documentation I found, most of the time the infected machine is somewhere inside the network, and the instructions are mainly about detecting that machine and then running an antivirus. Since my server is a web server, that advice was completely useless: I already knew the connections were coming from my server. According to the documentation there should be a bot running that fires up, does its thing, and then hides again, so I had to look at options to capture the network traffic and analyze where these connections were going. Fortunately CBL provides the sinkhole address (a server used to trap this activity), so I could check whether connections from my server were going to that address. First step: capture the network activity.

tcpdump -w archive.log

This command logs all the network activity on my Linux box to a file. If I want to leave it running for a couple of hours it's just a matter of doing:

nohup tcpdump -w archive.log 2>/dev/null &

but don't forget to kill it after some minutes/hours, otherwise it will eat up the disk!
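
If you'd rather not babysit the capture, tcpdump can rotate its own files; this is just a sketch, and the size and file count below are arbitrary values to adjust to your disk:

# rotate the capture roughly every 100 MB, keeping at most 5 files on disk
nohup tcpdump -C 100 -W 5 -w archive.log 2>/dev/null &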

To analyze the contents of the file you will need to download it to your machine and open it up with Wireshark.
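
If downloading the capture is not practical, a quick first pass can be done on the server itself; this is a minimal sketch, and 192.42.119.41 is simply the sinkhole address reported in my listing:

# read the capture back and keep only traffic to or from the sinkhole
tcpdump -nn -r archive.log host 192.42.119.41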

This confirmed what I already knew: my server was producing a lot of HTTP calls to the IP address 192.42.119.41 (the sinkhole). The next step was to figure out which application was causing this. After some digging I fell back on the trusty netstat; you will find a lot of different solutions out there, but after losing too much time trying to make them work I decided to use the tools I already had at hand, so I ran the following command for a couple of hours:

watch "netstat -atpun|grep 192.4"

unfortunately the connections flashed by too quickly and most of the time they showed up as "TIME_WAIT" (already closed), so I created the following script (created is a big word here; I just copied and pasted from Stack Overflow, as with 99% of the things we do nowadays):

# poll netstat every second and append any connection matching the sinkhole prefix (192.4...) to log.txt
while true
do
        netstat -atpun|grep 192.4 | tee -a log.txt
        sleep 1
done

This small script allowed me to trace for a couple of hours, and it finally showed that the application making these calls was the Apache server (no news here; a lot of suggestions were pointing to WordPress being infected). Now the awful part of the story: I had to take down the sites one by one to find which of the 4 domains I host was causing this. Once I had narrowed it down to one, I deactivated all its plugins and voila! log.txt stopped showing the IP... In the end the infected plugin was "documentor", so I removed it and reinstalled a fresh copy from the repos, and that fixed 4 days of suffering...
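
I did the narrowing down by hand, but if wp-cli happens to be installed on the server (an assumption, it was not part of my setup), the plugin juggling can be scripted; "documentor" below is just the plugin that turned out to be infected in my case:

# list the active plugins, deactivate the suspect, then pull a clean copy from the official repo
wp plugin list --status=active
wp plugin deactivate documentor
wp plugin install documentor --force
wp plugin activate documentor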

Hope this helps others; if not, I'm sure it will help me in the future to avoid wasting time reading a lot of generic information about EiTest.

Tuesday, July 11, 2017

Bizagi logging: The Right Way!

In many projects one of the recurring questions is how to produce effective logs in our processes and how to use them in production. Unfortunately the usual answer is: "you can use logs in development, but you need to switch them off in production or the performance will be hurt".

There are several logging libraries in the market that we could include in our rules, so I implemented a wrapper around Log4Net that you can find here: https://github.com/crossleyjuan/bizagilogger.

The good thing about this is that you can use logs in production environments without having to worry (too much) about the performance impact; it's just a matter of configuring it right and getting the best out of the configuration. The bad: this is something you need to configure in the config files and not from the Bizagi Management Console, so turning off the traces from the Bizagi console will not have any effect.

How to use the BizagiLogger

To use Log4net in your projects you will need 3 elements:
  • The Component Library: BizagiLogger
  • Configure the .config files
  • Create a Library Rule to wrap all the logs (Recommended)

Component Library: Bizagi Logger

Feel free to download the required DLLs from here: https://github.com/crossleyjuan/bizagilogger/releases/tag/1.0 or download the source code and compile it yourself. The DLL depends on log4net.dll, a great Apache project that makes this possible.

Register both dlls as Component Libraries in Bizagi

Using the logger in your rules

Syntax:
 var log = Logger.getLogger(emitter);  
 log.Info(ctx, text);  
 log.Error(ctx, text);  
 log.Warn(ctx, text);  
 log.Debug(ctx, text);  

  • The emitter is a very useful filter, as you can group together the messages related to the same emitter; my suggestion is to use the Case Number.
  • ctx is a tag, a variable called NDC in Log4Net; it gives you an additional parameter to search by, for example the name of the process.

Example

 var sEmitter = Me.Case.CaseNumber;  
 var log = Logger.getLogger(sEmitter);  
 log.Info("Process A", sLogText);  

Configuration

To configure the way the logger works you can use the appenders as explained in the log4net documentation. To make this easier, here's an example that could be useful in a production environment:
  • Add the configuration section:

    To be able to use the log4net options you need to let .NET know that you will add some configuration elements for log4net. To do this, add the following to the "configSections" section of your config files:

      <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />  
    

    full section:

      <configSections>  
       <section name="bizAgiComponentLibrary" type="BizAgi.ComponentLibrary.CComponentLibrarySectionHandler,BizAgi.ComponentLibrary"/>  
       <section name="bizAgiWFESTouchPoints" type="BizAgi.WFES.TP.CTouchPointsSectionHandler,BizAgi.WFES"/>  
           <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />  
       <!-- Uncomment for federate authentication -->  
       <!--<section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />-->  
      </configSections>  
    

Now you can add the appenders anywhere inside the <configuration> tag; you can use the following as a starting point:

  <log4net>  
           <appender name="BufferingForwarder" type="log4net.Appender.BufferingForwardingAppender">  
                <bufferSize value="512" />  
                <lossy value="false" />  
                <Fix value="268" />  
                <appender-ref ref="RollingFile" />  
                <!-- or any additional appenders or other forwarders -->  
           </appender>  
           <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">  
                <file type="log4net.Util.PatternString" value="c:\temp\Logs\WebApp-%date{yyyy-MM-dd}.log" />  
                <appendToFile value="false"/>  
                <rollingStyle value="Composite"/>  
                <maxSizeRollBackups value="-1"/>  
                <maximumFileSize value="2MB"/>  
                <staticLogFileName value="true"/>  
                <datePattern value="yyyy-MM-dd"/>  
                <preserveLogFileNameExtension value="true"/>  
                <countDirection value="1"/>  
                <layout type="log4net.Layout.PatternLayout">  
                     <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />  
                </layout>  
           </appender>   

       <root>  
            <appender-ref ref="BufferingForwarder" />  
       </root>  
  </log4net>  

These appenders perform very well, so you can use as many traces as you want in production with almost no performance impact. The reason is that the BufferingForwarder keeps chunks of information in memory and flushes them in batches to the RollingFile appender, which writes the file, creating a new one every day or when the file reaches the maximum defined in the "maximumFileSize" tag.

More information of the available appenders here: https://logging.apache.org/log4net/log4net-1.2.13/release/sdk/log4net.Appender.html

Do you have questions or comments? Use the comments section below if you need additional information.

Saturday, March 28, 2015

Open Source vs Closed Source

When I started djondb some years ago I had a different view of what an open source project means, so I want to share my short experience to help others understand the hidden trade-offs and the work that these kinds of decisions require. I just want to share with all of you my disappointing experience with open source, my limited view on it, and why I'm rolling back the decision I made 4 years ago to open djondb's source code to the world.

With my limited experience in the open source world I must say that I was impressed by how open source projects create communities out of nowhere: people willing to help, either sharing ideas or code, everybody working towards a common goal and getting a nice product that solves the problem at hand. This is exactly what I wanted, to share my passion for solving problems. The idea was good; everyone I spoke to shared the same view, that NoSQL looked like a great idea but the lack of transactions was something hard to trade away. Doing this on my own would be nearly impossible with my limited time and resources, and after so many years of using open source tools it seemed reasonable to jump in and make the code open source, so everyone would benefit from this new idea and I would bring balance to the world.

When I started the code I decided to keep it closed until it was more mature, then someone pointed that out as a bad idea:

No source code on Github? Only drivers?
Good move. Bye bye djondb.

So I thought: yes, Kunthar is right, I'm planning to open source the code anyway, so why not now? And I started the preparation: writing makefiles to make compilation easier for others, preparing scripts, documentation, etc. That took several weeks, even longer than some of the code components I had written for the project, but the idea seemed worth it.

The summary of this journey as an open source project was first written up here: http://crosstantine.blogspot.com.es/2013/03/balance-after-5-months-of-releasing.html in 2013, exactly 2 years ago. Even though I was already disappointed by then, several people pointed out the failures of my endeavor. Their arguments:


  • The documentation needs improvement
  • You need to promote your database more
  • Reporting issues is not easy, this could be improved.
So I decided to stick with the idea, improve these things, and see what happened. In 2014 I decided to focus on the code again: I had been working too hard on two fronts, coding and promoting, with more success on the coding side than on promotion and engagement, so I decided to focus on what I know best, code. Making a stronger database and fulfilling its promises would, I hoped, attract other developers and users to support the open source project.

The cruel reality is that users are too busy to report issues; the only good tester you will get is yourself. Developers are too busy to get engaged unless they see a money reward in it (I got several emails from people saying they wanted to be partners, but in the end none of them had the experience or the time to really spend on this).

After 4 years of having this project on GitHub I have produced 911 commits; the project ranks among the top 30% of GitHub projects by number of commits, even though I'm just one developer. From all that... only 2 issues were reported and no one ever sent a pull request or a patch.

So what did I get from this experience? Not too much, just a new experience and a bunch of biased conclusions:


  • If you don't have a stable product don't go to open source
  • People are willing to get involved if the product has made a name for itself.
  • Use your product yourself, don't wait for others to test it.
  • Don't waste time creating cross-platform compilation tools, etc.; if the product gets traction on one platform, others will be willing to help compile it on their own platforms.
  • Don't listen to others unless they have a proven history of success in the field you are trying to get into.

Some of these may sound very obvious to others, but they were not to me, and I had to learn them the hard way.

One of the few wise decisions I made was to keep the copyright of the code, otherwise I would be screwed: after 4 years of work, with no help at all, I would have had to walk away from this product. I'm glad I read about that from MySQL and others.

What now?

djondb's support has been removed from GitHub, and its license will change to closed source. As the GPL states, the versions that are already out there will remain free under the GPL; although the copyright of the name is still mine, the GPL allows others to use those versions as open source. So everything up to the version 0.3.3 that is out there will continue under open source, and newer versions will be closed source.

Is djondb still free to use?

Yes, djondb is free to use and you can put it on as many servers as you want; a commercial license will be provided if you require it, and enterprise support will return. In exchange you will be required to register in order to download, not too much to ask for a free database, don't you think?


Monday, October 13, 2014

Bizagi - Load Balancing - Where to specify the URL used in notifications to point to the load balancer?

Bizagi sends notifications to users when they are assigned to a particular task; these are known as Automatic Notifications and include the following default message:

Here is a brief information about the case 1251 (Id Case: 1251):<br>Creation Date: 10/13/2014<br>Category: Processes<br>Process Definition: Vacation Request<br><br>Click here <a href=http://dev-mymachine/BizagiApp/default.aspx?widget=activityform&idCase=1251>1251</a> to view the case online.

By default, Bizagi uses the name of the machine where the notification was generated as part of the link. If you want these links to point to the load balancer address instead, you need to configure the PROTOCOL, SERVER_NAME and APP_NAME keys according to your setup; to do this, add the following elements to your WebApplication/web.config file:


    <!-- PROTOCOL value assumed here; adjust it to your setup -->
    <add key="PROTOCOL" value="http" />
    <add key="SERVER_NAME" value="LoadBalancerHostName" />
    <add key="APP_NAME" value="BizagiApplication" />

This will produce the following output:

Here is a brief information about the case 1251 (Id Case: 1251):<br>Creation Date: 10/13/2014<br>Category: Processes<br>Process Definition: Vacation Request<br><br>Click here <a href=http://LoadBalancerHostName/BizagiApplication/default.aspx?widget=activityform&idCase=1251>1251</a> to view the case online.

This will point users to the load balancer instead of the individual cluster machine.

Friday, August 22, 2014

Bizagi Sub Processes Parent / Child

Sub processes are one of the best reusable elements you can use to support your business: they allow you to encapsulate logic and share it with other processes. But they are also one of the elements that create coupling; the concept is fully explained here: "Loose Coupling". So how do you design sub processes that are truly re-usable and easy to maintain?

Sub-processes as functions


The typical approach to sub processes creates dependencies between the caller and the sub process, so any change to the parent may impact the child, and the same applies to changes in the child; that's called a highly coupled design. To work around this and create cleaner processes you should think of your sub processes as "functions" that are called and return a result. Let me clarify this with a simple "hello world!" example. Imagine you want to create a sub process to standardise the "greeting" messages, so you want to solve this problem:

function parent() {
    recover_name;

    child_create_greeting(); 

    print greeting message;
}
The most common current approach:
var name;
var greeting;
function parent() {
    name = recover name;

    child_create_greeting();

    print greeting message;
}

function child_create_greeting() {
    greeting = "Hello " + name;
}
I know this is a silly sample, but bear with me for a moment and check the next approach:
function parent() {
    var name = recover name;

    var greeting = child_create_greeting(name);

    print greeting message;
}

function child_create_greeting(name) {
    greeting = "Hello " + name;
    return greeting;
}
The result is the same, but now the child and the parent can evolve without impacting each other every time you make a change. The same concept can be applied to Bizagi processes. In the sample above I was using the variables name and greeting to share information with the parent, but what if I create a "message" envelope that both processes share and that acts as the communication channel between them? Something like this:
message {
    var name;
    var greeting;
}
function parent() {
    message.name = recover name;

    child_create_greeting(message);

    print message.greeting;
}

function child_create_greeting(message) {
    message.greeting = "Hello " + message.name;
}
This creates an element that both processes agree on as the sharing mechanism, and therefore they will be able to evolve without any impact, as long as the "message" element follows the original agreement. Translated to Bizagi concepts, this is just an entity that holds the information the sub-process requires and where it stores the result of its work. The responsibility of the caller process is to create the message instance, assign the parameters and then use the result stored there. Does it make sense? Let's see it in action:

A real problem

The "Hello world!" sample is a good way to explain basic concepts, but let's face it, it's worthless if you don't see something applicable to real life problems. So here's a typical problem: "Our company requires a standard way to authorize different requests across our organization, therefore we need a reusable sub process that can be plugged into processes like, but not limited to: Travel Expenses and Vacations".

Model your sub processes as reusable processes

The first step is understanding that your approval process will run in different contexts and therefore should be modeled with that independence in mind. The trick here is to realize that your sub process is not approving Expenses or Vacations; it's approving requests (the value of the expense) or actions (taking vacation), and therefore you should model your sub process to approve "requests" or "actions".

The Process and sub process definition

Vacation request:



Approval:



Data Models

Vacation Request Model



Approval Sub Process Model



As you may have noticed, the M_Message entity is used in both processes; I added the suffix _I or _O (for Input and Output) to make it easier to understand. Now it's time to fill in the variables and pass them from the caller to the sub process; let's jump to the code.

Passing "parameters" to the sub process

As explained above, the M_Message entity is used as the communication element between the caller and the sub process, and therefore any element required by the Approval sub process should be placed there. That includes simple data, like "Value to approve" and "Date requested", as well as entities (like Employee, Request Detail, etc.). Here I'm using "request type" to enable the sub process to handle different kinds of messages.
var reqType = CHelper.getEntityAttrib("P_RequestType","idP_RequestType","RT_Code = 'REQ'");
<VacationRequest.entApproveRequest.entRequestType_I> = reqType;
var requestMessage = String.Format("Requesting vacactions starting from {0} for {1} days.", <VacationRequest.dStartDate>, <VacationRequest.iDays>);
<VacationRequest.entApproveRequest.entRequest_I.RequestDescription> = requestMessage;
This step just populates the "message" we are using to pass the values; now we set the entity on the attribute that will be referenced in the sub process, <Sub_Approval.entMessage>


Congratulations, you've just created a sub process that is truly re-usable and won't break the parent caller every time you make a change to it, and, most importantly, you can reuse it in any process without changing the sub process to adapt to new callers. I'd love to hear your comments!

Wednesday, June 5, 2013

Parsing command line arguments in a batch program (Windows)

Every time I had to create a batch file to do compilations, or any other kind of stuff, I always bumped into the same problem: parsing command line arguments. Usually I ended up with something like:


if "%1"=="-x32" (
   Do something for x32
)
if "%2"=="-x32" (
   Do something for x32
)
if "%1"=="-x64" (
   Do something for x64
)
if "%2"=="-x64" (
   Do something for x64
)

This is a really bad way to do it, so I decided to spend some time today and solve the problem once and for all. As a result I finally have a decent way to parse arguments, and I will document it here in case I need it again or someone else is struggling with the same thing. (I love getopts from Linux! But I guess Windows will keep forcing us into this kind of solution.)

 @echo off  
 setlocal enabledelayedexpansion  
 if [%1] ==[] goto usage  
 call:parseArguments %*  
 if "%x32%" == "true" (  
   echo Well done you set x32 to true  
 )  
 if "%x64%" == "true" (  
   echo Well done you set x64 to true  
 )  
 if NOT "%d%" == "" (  
   echo you set the output dir to: %d%  
 )  
 GOTO Exit  


 @rem ================================================================================  
 @rem Functions  
 @rem ================================================================================  
 :usage  
 Echo Usage: %0 [-x32] [-x64] [-d output-dir]  
 goto exit  

 :getArg  
 set valname=%~1  
 echo arg: !%valname%!  
 goto:eof  

 :parseArguments  
 rem ----------------------------------------------------------------------------------  
 @echo off  
 :loop  
 IF "%~1"=="" GOTO cont  
 set argname=%~1  
 set argname=%argname:~1,100%  
 set value=%~2  
 @rem if the next value starts with - then it's a new parameter  
 if "%value:~0,1%" == "-" (  
   set !argname!=true  
   SHIFT & GOTO loop  
 )  
 if "%value%" == "" (  
   set !argname!=true  
   SHIFT & GOTO loop  
 )  
 set !argname!=%~2  
 @rem jumps first and second parameter  
 SHIFT & SHIFT & GOTO loop  

 :cont  
 goto:eof  

 rem ----------------------------------------------------------------------------------  
 :Exit  

The magic occurs in the "parseArguments" function. (Oh yes, batch files have functions; I didn't know that until now. This page was really helpful: http://www.dostips.com/DtTutoFunctions.php.)

This function contains two things that I learned today. The first one is SHIFT: this command shifts the arguments, putting the second argument in first place, the third in second place, and so on. This is really useful if you don't know the number of arguments; you only need something like the following code to print them all:


 :loop  
 IF "%~1"=="" GOTO cont  
 echo %~1
 SHIFT & GOTO loop  
 :cont  

The second thing I learned is how to create dynamic variable names, thanks to this guy: http://batcheero.blogspot.com/2007/07/dynamic-variable-name.html. This is useful to create the variables that you're going to use later in your batch program. It is possible because I used setlocal enabledelayedexpansion at the very beginning of the batch program; otherwise the !thing! syntax won't work.

So the parseArguments function just iterates over the arguments, working out whether they were written as -x32 or as -d c:\temp (in the first case %x32% will be set to true, and in the latter %d% will contain c:\temp).

Here are the results:


> test.bat -d "c:\test dir\blah"
you set the output dir to: c:\test dir\blah
> test.bat -d "c:\test dir\blah" -x32 -x64
Well done you set x32 to true
Well done you set x64 to true
you set the output dir to: c:\test dir\blah

That's it, enjoy!

Tuesday, April 16, 2013

Writing tests that work

Disclaimer

Although some of the ideas exposed here will be too obvious to some developers, I wanted to write down some of my thoughts about how "unit" tests should be implemented in some projects. This is a discussion I had with a fellow co-worker after reading this article: Unit Testing Myths and Practices, and I think it's worth the time to post it here. Maybe some of you will agree with me and some will call me dumb; either way, I would love to read your thoughts about this and discuss them in the comments section.

My opinion about "Unit" Tests

Several definitions of unit tests are floating around the Internet and in books, but here's a simple demo of what I understand by unit tests, and the way I have seen unit tests implemented in several projects.

int myMethod(int arg1, int arg2) {
   return arg1 + arg2;
}
Unit test:
void testMyMethod() {

    // Testing positive numbers
    int a = 1;
    int b = 2;
    int expected = 3;
    int result = myMethod(a, b);
    ASSERT(expected, result);

    // Testing negative numbers
    a = -1;
    b = 2;
    expected = 1;
    result = myMethod(a, b);
    ASSERT(expected, result);

    // testing zero arguments
    ...

    // testing big integer values
    ...
}

As you may have noticed, I required 10+ lines of code to test 1 line of the business method. Now, what will happen to our test cases if we want to test the following method:

int myMethod2(Customer customer, int arg2) {
   int age = customer.age();
   return myMethod(age, arg2);
}

Now I will need to write tons of lines to check how my code reacts to every single combination of the customer's age or any value to be added, for example: test cases to check a NULL customer, customers without an age, summing up a negative age, etc. Is this what I wanted in the first place? Will this ensure that, if a reckless developer changes myMethod to sum Integers instead of ints, null errors are avoided in myMethod2? Or do we need to rewrite all the tests that depend on the first method? Do I even care about nulls being sent to myMethod?

Dependency injection will save the day!

Maybe some of you thought, "hey, this guy does not know a sh**t about coding and he lacks expertise; this is easily managed by dependency injection and method contracts". Maybe you are right, but let's check how our methods would change to use dependency injection to separate both concerns (bear in mind that the tests are now going to inject a dummy class that returns the proper values for each test).


[di]
IMySumClass sumClass;

int myMethod2(Customer customer, int arg2) {
   int age = customer.age();
   return sumClass.myMethod(age, arg2);
}

Done! Now we have a new problem: both tests are going to pass the test phase, and the error will pop up in our production system. Yes, I know... the method's contract... someone would say "the method says that it will receive X and Y, and if X and Y change then it's a new method because it's a new contract". Ok, here we go:

int myMethod(int arg1, int arg2) {
   return arg1 + arg2;
}

Integer myMethodWithIntegers(Integer arg1, Integer arg2) {
   return arg1 + arg2;
}

Solved! Now we have 2 methods that execute the same logic, so we will need to maintain both of them as well. (And don't forget that our test cases will need to be maintained too, so we now have 30+ lines to keep updated.) Too much work to ensure that a simple sum will work, don't you think?

Integrity tests

Ok, let's go back to the simple code we had in the first place:

int myMethod(int arg1, int arg2) {
   return arg1 + arg2;
}

int myMethod2(Customer customer, int arg2) {
   int age = customer.age();
   return myMethod(age, arg2);
}
Unit test:
void testMyMethod() {

    // Testing positive numbers
    int a = 1;
    int b = 2;
    int expected = 3;
    int result = myMethod(a, b);
    ASSERT(expected, result);

    // Testing negative numbers
    a = -1;
    b = 2;
    expected = 1;
    result = myMethod(a, b);
    ASSERT(expected, result);

    // testing zero arguments
    ...

    // testing big integer values
    ...
}

What I found useful is to keep the tests as simple as possible and targeted at the class we want to check. So instead of double-checking all the different options for an "a + b" operation in testMyMethod2, or in testMyMethod, I only add the tests that are useful in each case: I remove the negative checks, zero arguments, etc. that would clutter my code, and I don't add tests to testMyMethod2 that are not related to myMethod2 itself, that is, tests for things already covered somewhere else. Let's see the sample:

void testMyMethod() {

    // Testing positive numbers
    int a = 1;
    int b = 2;
    int expected = 3;
    int result = myMethod(a, b);
    ASSERT(expected, result);
}

void testMyMethod2() {
    // Customer with age
    Customer c = new Customer("x", "y", 32);
    int expected = 33;
    int result = myMethod2(c, 1);
    ASSERT(expected, result);

    // Test a customer too old (should be rejected because of his age)
    c = new Customer("x", "y", 32);
    try {
       myMethod2(c, 150);
       FAIL("Customer age was not properly checked");
    } catch (AnyCheckedException) {
        // Accepted case
    }
}

Notice how I removed the "negative", zero and big-number tests; I really don't care about these cases at this level, and they will be exercised by the "customer" cases anyway. These kinds of tests are more useful and they will protect my code from unexpected "real" failures, reducing the amount of work needed to write down all the test cases and covering the business cases instead of argument misuse.

Of course, this is a very simple case and it will not cover all the "what if" questions, but it will cover what is supported by the system. The last sample test code checks not only cases related to myMethod2 but myMethod as well; this means that myMethod will be checked on every single call made from any of the classes that use it, and it's going to be tested in a "business" way, not with a "what if" applied to every single argument.

What usually happens in production environments is that errors not originally tested will pop up, and we will need to add them to the proper test code to ensure they do not arise again. If the error was at the "customer" level, we add the proper lines to testMyMethod2 (or create a new test); if the problem was our sum method not covering big integers, we cover it in the customer test as well, and any change to testMyMethod2 will also exercise myMethod in a business-like scenario.

Conclusion

Every project is different and each one needs a solution that fits it, but I've been using this approach heavily, and this method of "accumulative" testing has worked really well to check and correct bugs in production systems.

I would love to read your thoughts, please post a comment. The "you're a dumba**" comments are welcome, but please try to support your ideas.