Wednesday, 29 March 2017

Logistic Regression Example


Logistic Regression   


Logistic regression is the preferred method for examining this type of data. These methods are different from what we have seen so far. At the same time, you will recognize a lot of similar features.

Let's begin with an example of the type of data that lends itself to this analysis. Consider the experimental data summarized in Table. There were six large jars, each containing a number of beetles and a carefully measured small amount of insecticide. After a specified amount of time, the experimenters examined the number of beetles that were still alive. We can calculate the empirical death rate for each jar's level of exposure to the insecticide. These are given in the last row of Table using

Mortality rate = Number died / Number exposed

How can we develop statistical models to describe these data? We should immediately recognize that within each jar the outcomes are binary valued: alive or dead. We can probably assume that these events are independent of each other. The counts of alive or dead should then follow the binomial distribution. (See for a quick review of this important statistical model.) There are six separate and independent binomial experiments in this example. The n parameters for the binomial models are the numbers of insects in each jar. Similarly, the p parameters represent the mortality probabilities in each jar.
The aim is to model the p parameters of these six binomial experiments. Any models we develop for these data will need to incorporate the various insecticide dose levels to describe the mortality probability in each jar. The alert reader will notice that the empirical mortality rates given in the last row of Table are not monotonically increasing with increasing exposure levels of the insecticide. Despite this remark, there is no reason for us to fit a nonmonotonic model to these data.

The aim of this chapter is to explain models for the different values of the p parameters using the values of the doses of the insecticide in each jar. The first idea that comes to mind is to treat the outcomes as normally distributed. All of the ns are large, so a normal approximation to the binomial distributions should work. We also already know a lot about linear regression, so we are tempted to fit the linear model

p = b0 + b1 * Dose

What is wrong with this approach? It seems simple enough, but remember that p must always be between 0 and 1. There is no guarantee that, at extreme values of the dose, this straight-line model would not produce estimates of p that are less than 0 or greater than 1. We would expect a good model for these data to always give an estimate of p between 0 and 1. A second but less striking problem is that the variance of the binomial distribution is not the same for different values of the p parameter, so the assumption of constant variance is not valid for the usual linear regression.
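Logistic regression fixes both problems by modeling logit(p) = b0 + b1 * Dose, which keeps every fitted p strictly between 0 and 1. Here is a minimal Python sketch; the six-jar counts are hypothetical (loosely patterned on classic beetle-mortality data, not the table in the text), and the fit uses Newton-Raphson on the binomial log-likelihood rather than any particular statistics package.

```python
import numpy as np

# Hypothetical six-jar data (illustrative numbers, not the table in the text).
dose = np.array([1.69, 1.72, 1.76, 1.78, 1.81, 1.84])  # insecticide dose
n    = np.array([59, 60, 62, 56, 63, 60])              # beetles exposed
died = np.array([6, 13, 18, 28, 52, 53])               # beetles that died

# Fit logit(p) = b0 + b1*dose by Newton-Raphson on the binomial log-likelihood.
X = np.column_stack([np.ones_like(dose), dose])
beta = np.zeros(2)
for _ in range(25):
    eta = np.clip(X @ beta, -30, 30)     # linear predictor, kept finite
    p = 1.0 / (1.0 + np.exp(-eta))       # fitted mortality probabilities
    W = n * p * (1 - p)                  # binomial variance weights
    score = X.T @ (died - n * p)         # gradient of the log-likelihood
    info = X.T @ (X * W[:, None])        # Fisher information matrix
    beta = beta + np.linalg.solve(info, score)

p_hat = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
print("intercept, slope:", beta.round(3))
print("fitted mortality rates:", p_hat.round(3))  # all strictly in (0, 1)
```

Unlike the straight-line model, the fitted rates can never fall outside (0, 1), no matter how extreme the dose.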


                                                           
                                                    

Tuesday, 28 March 2017

5 Books You Must Read by Age 30

5 Must-Read Books


If you are a reader, then you must read these books.

1. 1984 by George Orwell. This high school classic is well worth a re-read. People who are either just entering or getting settled into their careers are likely to relate a little differently to 1984 than they might have at 16 or 17. It is a story of power and brutality, and it will resonate with readers as power structures start to become more visible in a newly employed person's life.

2. Beijing Bastard: Into the Wilds of a Changing China by Val Wang. Aside from difficulties in scaling a career, many quarter-life crises are spurred on by a flimsy sense of self. Wang's memoir about finding her identity is sure to connect with soul-searchers, librarians say.

3. Between the World and Me by Ta-Nehisi Coates. New York Times critic A. O. Scott says this book is "essential, like water or air." Winner of the National Book Award, Coates's book reveals a perspective on race in America that challenges everyone to reckon with the country's brutal past, both recent and distant. The book can help people who are just learning about their place in the world gain new insight.

4. Feminism: Reinventing the F-Word by Nadia Abushanab Higgins. This book wants to make it less uncomfortable to be a feminist. The librarians say the book is essential for people moving into the wider world because so many communities are simply too diverse for regressive mindsets about equality.

5. Tiny Beautiful Things by Cheryl Strayed. Strayed, the once-anonymous advice columnist, has compiled pieces of wisdom into a book the librarians call "the best, most compassionate advice about being a fully realized, empathic person in the world." At its heart, Tiny Beautiful Things is a reminder that life is fraught with uncertainty, and that what we call "quarter-life crises" might just be the first in a series of opportunities to ask for help.

Sunday, 26 March 2017

Rank Sum Test

Rank Sum Test  


The medians test and the examination of the math test scores in Section reduce every observation to the equivalent of a coin toss. Specifically, the medians test judges every observation as being either above or below the pooled sample median. The actual magnitude of every observation is lost: it does not matter how far above or below the median an observation is. Does this seem like a tremendous loss of information? Rank methods meet this loss halfway. Instead of reducing all observations to binary above/below status, rank methods replace the actual observations with their order when the data is sorted.

To illustrate this method, consider the plant growth data from Table. All 20 of the pooled sample values are sorted in Table from smallest to largest and identified as belonging to either the control or the treated group. Each observation is also identified with its rank, or order number, from 1 to 20 in terms of the pooled sample. So, for example, the four smallest observations (ranked 1-4) are associated with the control group, and the next two smallest observations (ranked 5 and 6) are in the treated group.


In rank methods, we replace the observations by their ranks. A large observation achieves a high rank, unlike the medians test, where all observations larger than the median are treated equally. In Table, if both the treated and the control plants had roughly the same means, then we would also expect these two samples to have roughly the same average rank values.

We will return to this example and the SAS program for the rank test in Section. Before we get to that, let us again illustrate the use of ranked data and show that this is a frequently used approach to describing data that is sometimes qualitative, rather than quantitative, in nature.
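To make the idea concrete, here is a minimal Python sketch of the same procedure (the text's own analysis uses SAS). The plant growth numbers below are made up for illustration: two groups of 10, pooled and ranked, followed by SciPy's Wilcoxon rank-sum test.

```python
import numpy as np
from scipy.stats import rankdata, ranksums

# Hypothetical plant growth data, two groups of 10 (illustrative values,
# not the table from the text).
control = np.array([4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14])
treated = np.array([6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26])

# Rank the pooled 20 observations, 1 = smallest.
ranks = rankdata(np.concatenate([control, treated]))
print("mean rank, control:", ranks[:10].mean())
print("mean rank, treated:", ranks[10:].mean())

# The Wilcoxon rank-sum test compares the groups through these ranks alone.
stat, p = ranksums(control, treated)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p:.3f}")
```

If the two groups had roughly equal means, their mean ranks would be close; a large gap in mean ranks is exactly what the rank-sum statistic measures.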

Correlation Test

Correlation  
                                                                                                                                                                               

The correlation coefficient is a single-number summary expressing the utility of linear regression. The correlation coefficient is a dimensionless number between -1 and +1. The slope and the correlation have the same positive or negative sign. This single number is used to convey the strength of a linear relationship, so values closer to -1 or +1 indicate greater fidelity to a straight-line relationship.

The correlation is standardized in the sense that its value does not depend on the means or standard deviations of the x or y values. If we add or subtract the same values from the data (and thereby change the means), the correlation remains the same. If we multiply all the xs (or the ys) by some positive value, the correlation remains the same. If we multiply either the xs or the ys by a negative number, the sign of the correlation will reverse.

As with any oversimplification of a complex situation, the correlation coefficient has its benefits but also its shortcomings. A variety of values of the correlation are illustrated. Each of these separate graphs consists of 50 simulated pairs of observations. A correlation of 0, in the upper left, shows no indication of a linear relationship between the plotted variables. A correlation of 0.4 does not indicate much strength, either. A correlation of either 0.8 or -0.9 indicates a rather strong linear trend.
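These invariance properties are easy to verify numerically. A minimal Python sketch with 50 simulated pairs (the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)            # a roughly linear relationship

r = np.corrcoef(x, y)[0, 1]

# Shifting either variable, or scaling by a positive number, leaves r alone.
r_same = np.corrcoef(x + 100, 3 * y - 7)[0, 1]

# Multiplying one variable by a negative number reverses the sign of r.
r_flipped = np.corrcoef(-x, y)[0, 1]

print(round(r, 4), round(r_same, 4), round(r_flipped, 4))
```

The first two printed values agree, and the third is the negative of the first, exactly as described above.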

Saturday, 25 March 2017

Binomial Distribution

Binomial Distribution  

                                                                                                                                                                                    The binomial distribution is one of the basic mathematical models for describing the behavior of a random outcome. An experiment is performed that results in one of two complementary outcomes, usually referred to as "success" or "failure". Each experimental outcome occurs independently of the others, and every experiment has the same probability of failure or success.                                  

A toss of a coin is a good example. The coin tosses are independent of each other, and the probability of heads or tails is constant. The coin is said to be fair if there is an equal probability of heads and tails on each toss. A biased coin will favor one outcome over the other. If I toss a coin (whether fair or biased) several times, can I anticipate the numbers of heads and tails that can be expected? There is no way of knowing with certainty, of course, but some outcomes will be more likely than others. The binomial distribution allows us to calculate a probability for every possible outcome.

Consider another example. Suppose I know from experience that, when driving through a certain intersection, I will have to stop for the traffic light 80% of the time. Each time I pass, whether or not I have to stop is independent of all the previous times. The 80% rate never varies. It does not depend on the time of day, the direction that I am traveling, or the amount of traffic on the road.

If I pass through this same intersection eight times in one week, how many times should I expect to have to stop for the light? What is the probability that I will have to stop exactly six times in the eight times that I pass this intersection?

The binomial model is a convenient mathematical tool to help explain these types of experiences. Suppose there are N independent events, where N takes on a positive integer value 1, 2, .... Each experiment either results in a success with probability p or results in a failure with probability 1 - p, for some value of p between 0 and 1.
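For the intersection example, the binomial model answers both questions directly. A short Python sketch, using the 80% stopping rate assumed in the text:

```python
from math import comb

n_trials, p_stop = 8, 0.8

# Expected number of stops in eight passes: the binomial mean n*p.
expected_stops = n_trials * p_stop
print("expected stops:", expected_stops)            # 6.4

# P(exactly 6 stops in 8 passes) = C(8, 6) * 0.8**6 * 0.2**2
prob_six = comb(n_trials, 6) * p_stop**6 * (1 - p_stop)**2
print(f"P(exactly 6 of 8) = {prob_six:.4f}")        # about 0.2936
```

So even though 6.4 stops are expected on average, the chance of exactly six stops is a little under 30%; the remaining probability is spread over the other possible counts 0 through 8.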

Friday, 24 March 2017

Degrees of Freedom

What Are Degrees of Freedom?


We have seen this curious expression in two settings now: in the use of the t-test, and when using the chi-squared test. What exactly are degrees of freedom, anyway?

More specifically, let's look at two situations where these words come up. The expression

(x1 - x̄)² + (x2 - x̄)² + ... + (xN - x̄)²

(where x̄ is the average of the xs) is associated with N - 1 degrees of freedom. Similarly, in a 2 × 2 table of counts, we always say the chi-squared statistic has one degree of freedom. The general rule is: the degrees of freedom are the number of data points to which you can assign any value. Let's see how to apply this rule. When we look at the expression above, there are N terms in the sum. Each term is a squared difference between an observation xi and the average x̄ of all the xs. Let us look at these differences and write them down:

d1 = x1 - x̄, d2 = x2 - x̄, ..., dN = xN - x̄.
There are N differences di, but notice that these must always sum to 0: adding up all of the values on the right-hand sides gives the sum of the xs minus N times their average. The di must sum to 0 no matter what values the xs are.

So how many differences di can we freely choose to be any values we want, and still have them add up to 0? The answer is all of them except for the last one. The last one is determined by all of the others, so that they all sum to 0. Notice also that it does not matter which di we call the "last". We can freely choose N - 1 values, and the one remaining value is determined by all of the others.

Now let us examine the use of degrees of freedom when discussing the chi-squared test. As an example, let us return to the data given below.
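This bookkeeping is easy to check numerically. A minimal Python sketch (the ten observations are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)      # any 10 observations will do
d = x - x.mean()             # deviations from the sample average

# The deviations always sum to zero (up to floating-point rounding)...
print("sum of deviations:", d.sum())

# ...so the last deviation is fully determined by the first N - 1 of them.
print("last deviation:     ", d[-1])
print("minus sum of others:", -d[:-1].sum())
```

The last two printed numbers agree: only N - 1 of the deviations are free.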
                                                                                               
The chi-squared statistic measures the association of rows and columns. In the present example, the association of interest is between developing lung cancer and exposure to the fungicide. The test is independent of the numbers of mice allocated to exposure or not and the numbers of mice that eventually develop tumors or not. The significance level of the test should only reflect the "inside" counts of this table and not these marginal totals.
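A short Python sketch of the chi-squared test for such a 2 × 2 table. The mouse counts below are made up for illustration; SciPy's chi2_contingency reports the single degree of freedom that a 2 × 2 table always carries.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table of mouse counts (made up for illustration):
# rows = exposed / not exposed, columns = tumor / no tumor.
table = [[4, 16],
         [5, 75]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.3f}, degrees of freedom = {dof}, p = {p:.3f}")
```

With the margins held fixed, only one "inside" count can be chosen freely, which is why dof comes out as 1.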

Wednesday, 22 March 2017

Statistics

What is Statistics?  

In an effort to present a lot of mathematical formulas, we sometimes lose track of the central idea of the discipline. It is important to remember the big picture when we get too close to the subject.

Let us consider a vast wall that separates our lives from the place where the information resides. It is impossible to see over or around this wall, but every now and then we have the good fortune of having some pieces of data thrown over to us. On the basis of this fragmentary sampled data, we are supposed to infer the composition of the remainder on the other side. This is the aim of statistical inference.

The population is usually vast or infinite, whereas the sample is just a handful of numbers.

There is an enormous possibility for error, of course. If all of the left-handed people I know also have artistic ability, am I allowed to generalize this to a statement about all left-handed people? In this case I do not have much data to make my claim, and my statement should reflect a large possibility of error. Maybe most of my friends are also artists. In this case we say that the sample data is biased because it contains more artists than would be found in a representative sample of the population.

Consider the separate concepts of sample and population for numerically valued data. The sample average is a number that we use to infer the value of the population mean. The average of several numbers is itself a number that we can actually compute.
The population mean, however, is on the other side of the imaginary wall and is not observable. In fact, the population mean is almost an unknowable quantity that could not be observed even after a lifetime of study.

Statistical inference is the process of generalizing from a sample of data to the larger population. The sample average is a simple statistic that immediately comes to mind. The Student t-test is the principal method used to make inferences about the population mean on the basis of the sample average.
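As a concrete illustration of inferring a population mean from a sample average, here is a minimal Python sketch using simulated data and SciPy's one-sample t-test. The "population" mean of 5.0 is something we get to know only because we simulated the data ourselves.

```python
import numpy as np
from scipy.stats import ttest_1samp

# A simulated "handful of numbers" thrown over the wall from a population
# whose true mean (5.0 here) we pretend not to know.
rng = np.random.default_rng(2)
sample = rng.normal(loc=5.0, scale=1.0, size=25)

print("sample average:", round(sample.mean(), 3))

# Student t-test of H0: the population mean equals 5.0.
t_stat, p_value = ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

The sample average is observable; the population mean is not, and the t-test quantifies how compatible the observable number is with a hypothesized value of the unobservable one.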

Tuesday, 21 March 2017

Inverse Relation

                                       
Inverse Relation
                                  
Let R be a relation from A to B. The inverse relation of R, denoted by R^(-1), is a relation from B to A and is defined by

R^(-1) = {(y, x) : x ∈ A, y ∈ B, (x, y) ∈ R}.

In other words, the inverse relation is obtained by reversing each of the ordered pairs belonging to R. Thus,

(x, y) ∈ R ⇔ (y, x) ∈ R^(-1).

Evidently, the range of R is the domain of R^(-1), and vice versa. If A = B, then R and R^(-1) are both relations on A.
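The definition translates directly into code. A minimal Python sketch, treating relations as sets of ordered pairs (the sets A, B, and R are illustrative):

```python
# Relations as sets of ordered pairs; A, B, and R are illustrative.
A = {1, 2, 3}
B = {"a", "b"}
R = {(1, "a"), (2, "b"), (3, "a")}      # a relation from A to B

def inverse(relation):
    """R^(-1) = {(y, x) : (x, y) in R} -- reverse every ordered pair."""
    return {(y, x) for (x, y) in relation}

R_inv = inverse(R)
print(sorted(R_inv))

# The range of R is the domain of R^(-1), and inverting twice recovers R.
assert {y for (_, y) in R} == {x for (x, _) in R_inv}
assert inverse(R_inv) == R
```

The two assertions at the end check exactly the properties stated above: range and domain swap, and (R^(-1))^(-1) = R.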

Monday, 20 March 2017

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY 

                                                     
While studying analytical geometry in two dimensions, and in the introduction to three-dimensional geometry, we confined ourselves to Cartesian methods only. We will now apply vector algebra to three-dimensional geometry. The purpose of this approach is that it makes the study simple and elegant.
In this chapter, we shall study the direction cosines and direction ratios of a line joining two points, and also discuss the equations of lines and planes in space under different conditions, the angle between two lines, two planes, and a line and a plane, the shortest distance between two skew lines, and the distance of a point from a plane. Most of the above results are obtained in vector form. Nevertheless, we shall also translate these results into the Cartesian form which, at times, presents a clearer geometric and analytic picture of the situation.
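As a small numerical preview of direction cosines, here is a Python sketch; the two points P and Q are illustrative. The direction ratios of the line PQ are the coordinate differences, and dividing by their length gives the direction cosines, whose squares always sum to 1.

```python
import numpy as np

# Two illustrative points; direction cosines of the line joining P and Q.
P = np.array([1.0, 2.0, 3.0])
Q = np.array([4.0, 6.0, 3.0])

ratios = Q - P                             # direction ratios of PQ
l, m, n = ratios / np.linalg.norm(ratios)  # direction cosines

print("direction cosines:", round(l, 4), round(m, 4), round(n, 4))
print("l^2 + m^2 + n^2 =", round(l**2 + m**2 + n**2, 10))  # always 1
```

Here the ratios are (3, 4, 0), of length 5, so the cosines are (0.6, 0.8, 0.0).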

Sunday, 19 March 2017

Complex Problem

  Complex Problem  

In World War II, when war operations had to be planned to economize expenditure and maximize damage to the enemy, linear programming problems came to the forefront.

The first problem in linear programming was formulated in 1941 by the Russian mathematician L. Kantorovich and the American economist F. L. Hitchcock, both of whom worked on it independently of each other. This was the well-known transportation problem. In 1945, an English economist, G. Stigler, described yet another linear programming problem: that of determining an optimal diet.

In 1947, the American mathematician G. B. Dantzig suggested an efficient method known as the simplex method, an iterative procedure to solve any linear programming problem in a finite number of steps.

L. Kantorovich and the American mathematical economist T. C. Koopmans were awarded the Nobel Prize in Economics in 1975 for their pioneering work in linear programming. With the advent of computers and the necessary software, it has become possible to apply the linear programming model to increasingly complex problems in many areas.

Friday, 17 March 2017

The Analytical Geometry

Analytical Geometry    


The French mathematician and philosopher René Descartes (1596-1650) is credited with the invention of this new branch of geometry, which after his name is also called Cartesian geometry.

The fundamental idea of analytical (or coordinate) geometry is the representation of points in the plane by ordered pairs of real numbers, called coordinates, and the representation of lines and curves by algebraic equations. Coordinate geometry has enabled the integration of algebra and geometry, since algebraic methods are used to represent and prove the fundamental properties of the curves corresponding to particular types of equations, and to analyze various geometrical properties of these curves. Due to these features, coordinate geometry is considered a technique for the analysis of geometric figures such as the straight line, parabola, circle, and hyperbola, based on certain axioms suggested by physical considerations.


Thursday, 16 March 2017

Graphical Method to solve LPP model

Graphical Method    


                                                                                                                                                          

An optimal as well as feasible solution to an LP problem is obtained by choosing, from the several values of x1, x2, ..., xn, the one set of values that satisfies the given set of constraints simultaneously and also provides the optimal [maximum or minimum] value of the given objective function.

For LP problems that have only two variables, the entire set of feasible solutions can be displayed graphically by plotting the linear constraints on graph paper to locate the best [optimal] solution. The technique used to identify the optimal solution is called the graphical solution approach for an LP problem with two variables.

Although most real-world problems have more than two decision variables, and hence cannot be solved graphically, this solution approach provides a valuable understanding of how to solve LP problems involving more than two variables algebraically.

In this chapter, we shall discuss two graphical solution methods: (i) the extreme point solution method and (ii) the iso-profit (cost) function line method, to find the optimal solution to an LP problem.
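Although this chapter solves such problems by hand on graph paper, a two-variable example can also be checked with a solver. A minimal Python sketch using SciPy's linprog; the objective and constraints below are made up for illustration.

```python
from scipy.optimize import linprog

# Illustrative two-variable LP:
#   maximize   3*x1 + 2*x2
#   subject to x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x1 >= 0, x2 >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")

print("optimal corner point:", res.x)    # an extreme point of the feasible region
print("maximum objective value:", -res.fun)
```

The optimum lands on a corner (extreme point) of the feasible region, which is exactly what the graphical extreme point method exploits.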


Wednesday, 15 March 2017

SIMPLEX METHOD

SIMPLEX METHOD          
                                                                                                                                                   
 

Most real-life problems, when formulated as an LP model, have more than two variables and are too large for the graphical solution method. Thus, we need a more efficient method to find an optimal solution for such problems. In this chapter, we shall discuss a procedure called the simplex method for solving an LP model of such problems. The method was developed by G. B. Dantzig in 1947.

The simplex is an important term in mathematics that represents an object in n-dimensional space connecting n + 1 points. In one dimension, a simplex is a line segment connecting two points; in two dimensions, it is a triangle formed by joining three points; in three dimensions, it is a four-sided pyramid having four corners.

The concept of the simplex method is similar to the graphical method. In the graphical method, extreme points of the feasible solution space are examined to search for the optimal solution at one of them. For LP problems with several variables, we may not be able to graph the feasible region, but the optimal solution will still lie at an extreme point of the many-sided, multidimensional figure [called an n-dimensional polyhedron] that represents the feasible solution space. The simplex method examines these extreme points in a systematic manner, repeating the same set of steps of the algorithm until an optimal solution is reached. It is for this reason that it is also called an iterative method.

Since the number of extreme points [corners or vertices] of the feasible solution space is finite, the method assures improvement in the value of the objective function as we move from one iteration [extreme point] to another, achieves the optimal solution in a finite number of steps, and also indicates when an unbounded solution is reached.
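A problem with three variables cannot be graphed, but a solver handles it the same way. Here is a Python sketch using SciPy's linprog, whose HiGHS backend includes a simplex implementation; the numbers are made up for illustration.

```python
from scipy.optimize import linprog

# Illustrative three-variable LP (too many variables to graph):
#   maximize   2*x1 + 3*x2 + x3
#   subject to x1 + x2 + x3 <= 10
#              2*x1 + x2    <= 8
#              x2 + 2*x3    <= 6,   all xi >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-2, -3, -1],
              A_ub=[[1, 1, 1], [2, 1, 0], [0, 1, 2]],
              b_ub=[10, 8, 6],
              bounds=[(0, None)] * 3,
              method="highs")

print("optimal extreme point:", res.x)
print("maximum objective value:", -res.fun)
```

As promised, the optimum is reached at an extreme point of the three-dimensional feasible polyhedron, in a finite number of iterations.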

Thursday, 9 March 2017

The triangle of Mars

 The triangle of Mars  
                                                                                                            

The triangle of Mars is formed by the lines of life, head, and the hepatica. The shape and the position of the great triangle must be considered by themselves, although it contains the upper, the middle, and the lower angle, which three points will be dealt with later. When the triangle is well formed by the lines of the head, life, and health, it should be broad and enclose the entire plain of Mars. In such a case it denotes breadth of views, liberality, and generosity of spirit; such a person will be inclined to sacrifice himself to further the interests of the whole, not the unit.

If, on the contrary, it is formed by three small, wavy, uncertain lines, it denotes timidity of spirit, meanness, and cowardice. Such a man would always go with the majority, even against his principles. When, in the second formation of the triangle, it has for its base the line of the sun, the subject will then have narrow ideas but great individuality and strong resolution. Such a sign, from the very qualities it exhibits, contains within itself the seeds of worldly success.

Wednesday, 8 March 2017

THE LINE OF INTUITION

THE LINE OF INTUITION  

The line of intuition is more often found on the philosophic, the conic, and the psychic hands than on any other of the seven types. Its position on the hand is almost that of a semicircle from the face of the Mount of Mercury to that of the Mount of Luna. It sometimes runs through or with the hepatica, but it can be found clear and distinct even when the hepatica is marked. It denotes a purely impressionable nature: a person keenly sensitive to all surroundings and influences, with an intuitional feeling of presentiment for others, and strange, vivid dreams and warnings which science has never been able to account for by that much-used word, coincidence.

Tuesday, 7 March 2017

THEORY OF FLUXIONS

THEORY OF FLUXIONS

The origin of the integral calculus goes back to the early period of the development of mathematics. The great development of the method of exhaustion in that early period was obtained in the works of Eudoxus (440 B.C.) and Archimedes (330 B.C.). The theory of calculus began in the 17th century. In 1665, Newton began his work on the calculus, described by him as the theory of fluxions, and used his theory in finding the tangent and radius of curvature at any point on a curve. Newton introduced the basic notion of the inverse function, called by him the anti-derivative, or the inverse method of tangents.

Latin Cubes

What are Latin Cubes? The practical applications of Latin cubes and related designs are factorial experiments. Factorial experiments for me...