ZESZYTY
NAUKOWE
Uczelni Warszawskiej
im. Marii Skłodowskiej-Curie
KWARTALNIK
4 (46) / 2014
SCIENTIFIC PAPERS
of the Maria Skłodowska-Curie Warsaw Academy
QUARTERLY
4 (46) / 2014
RADA NAUKOWA / THE SCIENTIFIC COUNCIL
Kazimierz WORWA – przewodniczący, Maciej TANAŚ – sekretarz, prof. prof. Jewgienij
BABOSOW, Olga BAŁAKIRIEWA, Henryk BEDNARSKI, Ramiro Délio BORGES de MENESES, Lev BUKOVSKÝ, Wiktor CZUŻIKOW, Nadieżda DIEJEWA, Józef FRĄŚ, Karl GRATZER,
Dieter GREY, Janusz GUDOWSKI, Maria Luisa GUERRA, Renata HOTOVA, Dietmar JAHNKE,
Tatiana JEFIMIENKO, Mariusz JĘDRZEJKO, Norbert KANSWOHL, Henryk KIRSCHNER,
Anatolij KOŁOT, Wiesław KOWALCZEWSKI, Zbigniew KRAWCZYK, Vladimir KRČMERY,
Natalia KUTUZOWA, Stefan M. KWIATKOWSKI, Zbigniew LANDAU, Ella LIBANOWA, Jelena MAKAROWA, František MIHINA, Kiyokazu NAKATOMI, Witalij PRAUDE, Michaił
ROMANIUK, Jurij S. RUDENKO, Gregory SĘDEK, Władimir SUDAKOW, Jan
SZCZEPAŃSKI, Janusz TANAŚ, Besrat TESFAYE, Zacharij WARNALIJ, Nonna ZINOWIEWA
ZESPÓŁ REDAKCYJNY / EDITORIAL TEAM
Zdzisław SIROJĆ – redaktor naczelny, Katarzyna BOCHEŃSKA-WŁOSTOWSKA – zastępca
redaktora naczelnego, Małgorzata MIKULSKA – sekretarz redakcji, Ivan BALAŽ, Jerzy CHORĄŻUK, Jakub Jerzy CZARKOWSKI, Maciej DĘBSKI, Krzysztof KANDEFER, Jurij
KARIAGIN, Gustaw KONOPACKI, Edyta ŁYSZKOWSKA, Maciej SMUK
REDAKTORZY TEMATYCZNI / THEMATIC EDITORS
Prof. prof. Józef FRĄŚ, Marek GRENIEWSKI, Mariusz JĘDRZEJKO, Zbigniew KRAWCZYK,
Zdzisław NOWAKOWSKI, Jan SZCZEPAŃSKI, Maciej TANAŚ
REDAKTORZY STATYSTYCZNI / STATISTICAL EDITORS
Brunon GÓRECKI, Tadeusz MIŁOSZ
REDAKTORZY JĘZYKOWI / LANGUAGE EDITORS
Jęz. pol. – Katarzyna BOCHEŃSKA-WŁOSTOWSKA, Katarzyna TOMASIŃSKA, jęz. ang. – Małgorzata CZAPLEJEWICZ-KOŁODZIŃSKA, Małgorzata ŻYCKA, Marcin ŁĄCZEK, Aleksandra PENKOWSKA, jęz. ang., ros. i ukr. – Susanna KITAJEWA, jęz. ang. i hiszp. – Franciszek BIAŁY, jęz. ang., hiszp. i port. – Ramiro Délio BORGES de MENESES, jęz. ang. i franc. – Anna PENKOWSKA, jęz. ros. i białorus. – Tamara JAKOWUK, jęz. niem. – Barbara KAZUBEK, jęz. ukr. – Bazyli NAZARUK, Jurij KARIAGIN, jęz. słow. i cz. – Ivan BALAŽ, jęz. włoski – Ireneusz ŚWITAŁA, Daniele STASI
REDAKTOR TECHNICZNY/ TECHNICAL EDITOR
Adam POLKOWSKI, [email protected]
DRUK I OPRAWA / PRINTING AND BINDING
SOWA Sp. z o.o.
ul. Hrubieszowska 6a
01-209 Warszawa, tel./fax /22/ 431 81 50
e-mail: [email protected]
WYDAWCA/ PUBLISHER
Uczelnia Warszawska im. Marii Skłodowskiej-Curie
03-204 Warszawa, ul. Łabiszyńska 25
tel./fax /22/ 814 3778; e-mail: [email protected]
© Copyright by Uczelnia Warszawska im. Marii Skłodowskiej-Curie
Wersja papierowa pisma jest wersją pierwotną
/The paper version of the journal is the initial release/
Nakład 50 egzemplarzy
ISSN 1897-2500
Contents / Spis treści
DISSERTATIONS ‒ ARTICLES ‒ STUDIES
ROZPRAWY ‒ ARTYKUŁY ‒ STUDIA
Urszula TYLUŚ
Poverty of children and youth as a contemporary social problem ................................ 7
Ubóstwo dzieci i młodzieży jako współczesny problem społeczny
Marcin ŁĄCZEK
Promoting community cohesion in English education
settings on the example of Barnfield South Academy in Luton.................................. 17
Promowanie spójności społecznej w angielskich placówkach oświatowych
na przykładzie Barnfield South Academy w Luton
Mirosław CIENKOWSKI, Tomasz WOŁOWIEC
Market reactions of entities to income tax and managerial decisions ........................ 33
Rynkowe reakcje podmiotów gospodarczych wobec podatku dochodowego i decyzji zarządczych
Piotr SKŁODOWSKI, Anna BIELSKA
The role of soils in sustainable development of rural areas ........................................ 63
Rola gleb w zrównoważonym rozwoju obszarów wiejskich
Rafał GRUPA
Total Quality Management as a philosophy of quality management ......................... 73
Total Quality Management jako filozofia zarządzania jakością
Maciej KIEDROWICZ
The importance of an integration platform within the organisation .......................... 83
Znaczenie platformy integracyjnej w organizacji
Olena K. YAKYMCHUK
Concept of managing regional budgets during transition
to sustainable self-development ................................................................................. 95
Koncepcja zarządzania budżetem regionalnym w warunkach przejścia do zrównoważonego rozwoju
Zdzisław SIROJĆ
Social capital management in the contemporary city .............................................. 109
Zarządzanie kapitałem społecznym we współczesnym mieście
Tatyana KROTOVA
Evolution of model. The origins of simulation in design.......................................... 117
Ewolucja modelu. Początki symulacji w projektowaniu
Zbigniew WESOŁOWSKI
Application of Computer Simulation for Examining
the Reliability of Real-Time Systems ........................................................................ 145
Zastosowanie symulacji komputerowej do badania niezawodności systemów czasu rzeczywistego
Gustaw KONOPACKI
Modelling software testing process with regard to secondary errors ........................ 163
Modelowanie procesu testowania oprogramowania z uwzględnieniem błędów wtórnych
Kazimierz WORWA
Analytical method for choosing the best software supplier ....................................... 177
Analityczna metoda wyboru najlepszego dostawcy oprogramowania
Kazimierz WORWA, Gustaw KONOPACKI
Analysis of the PageRank algorithm ......................................................................... 195
Analiza algorytmu PageRank
Sergey F. ROBOTKO
The generalized criterion of the assessment
of efficiency and optimization of stocks in logistic systems ..................................... 211
Uogólnienie kryterium oceny efektywności i optymalizacji zasobów w systemach logistycznych
Tomasz WOJCIECHOWSKI
Control measurements of bridge structures .............................................................. 225
Kontrolne pomiary mostów
Anna BIELSKA, Piotr SKŁODOWSKI
Analysis of the factors that affect determination of the field-forest boundary ........... 239
Analiza czynników wpływających na projektowanie granicy rolno-leśnej
Recenzenci Zeszytów Naukowych Uczelni Warszawskiej
im. Marii Skłodowskiej-Curie / Reviewers of Scientific Journals ............................. 253
Informacje dla autorów / Information for Authors ................................................... 255
DISSERTATIONS
ARTICLES
STUDIES
ROZPRAWY
ARTYKUŁY
STUDIA
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Urszula Tyluś
University of Natural Sciences and Humanities in Siedlce
Poverty of children and youth
as a contemporary social problem
Introduction
Poverty affecting children is an alarming phenomenon, deeply set in the process of pauperization of societies around the globe. The unfavorable existential situation of Polish children is even more common than in other European countries. This claim is based on the analysis of relative poverty among children in developed countries presented in the 2012 report of the UNICEF Innocenti Research Centre. More worrying news about the situation of Polish children also comes from the Central Statistical Office (GUS). According to its data for 2011 and 2012, deprivation was found among children and the youth. During the examined period about 10.5% of the young generation up to the age of 18 lived in families whose daily expenses fell below the officially stated poverty line. Families whose daily expenses were lower than the subsistence minimum included 9% of youngsters under the age of 18. In consequence, children and the youth constituted around 31% of the population at risk of extreme poverty, which means that statistically almost every third person in this group was a minor (based on GUS data for 2011 and 2012). The statistical data only illustrate the quantitative dimension of this phenomenon without revealing the details of living in poverty. They merely enable one to look objectively into the demographic structure of the society, but at the same time they provide a basis for further reflection on children and the youth living in poverty.
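The quoted shares are easy to misread, so a minimal arithmetic sketch may help; in it, the total-population and overall extreme-poverty figures are illustrative assumptions, and only the 9% rate for under-18s and the resulting roughly 31% share correspond to the data cited above.

```python
# Illustrative arithmetic behind the quoted shares. Only the 9% rate for
# under-18s and the ~31% result correspond to the cited GUS data; the
# remaining inputs are assumed for the example.
population = 38_500_000            # assumed total population
youth_share = 0.19                 # assumed share of people under 18
youth_extreme_poverty = 0.09       # under-18s below the subsistence minimum (from the text)
overall_extreme_poverty = 0.055    # assumed overall extreme-poverty rate

poor_youth = population * youth_share * youth_extreme_poverty
poor_total = population * overall_extreme_poverty

share = poor_youth / poor_total
print(f"under-18s among all people at risk of extreme poverty: {share:.0%}")
# -> 31%, i.e. statistically almost every third person in this group is a minor
```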
Consequences of children’s growth in poverty conditions
The world's research on poverty problems grew significantly in the second half of the 20th century, particularly in the 1960s. Vast debates and deliberations on the concept of poverty, the culture of poverty and social exclusion were held within various sociological, anthropological and historical disciplines, also affecting social policy. 'The Other America: Poverty in the United States' by Michael Harrington deserves special attention here. Based on his seven-year research, the author outlines a social critique referring to 'the economic sub-world' and the waste of social abundance related to all those who do not participate in life and have been banned from the process of social development. A lot was contributed to the early thought on social inequalities and the multi-aspect problems of poverty by a French historian, Michel Mollat. He gave lectures at the Sorbonne on the psychological and social results of poverty. These resulted in his two-volume work entitled 'Études sur l'histoire de la pauvreté', published in 1974. Research on the culture of poverty was also carried out by Oscar Lewis. In works such as 'The Children of Sánchez', 'La Vida' and 'Pedro Martínez' he revealed the dissimilarity of group behaviours found in poor societies as a form of deviant actions, which seem to be an inevitable consequence of the status of the poor societies inhabiting Mexico, India, Puerto Rico and Cuba. It is worth mentioning that Lewis's theory was long rejected, as it opposed the contemporary mental trends which held that every individual is free to attain his or her own success in life.
It took another fifteen years for the poverty culture theory to be appreciated, even though it still meets with critical responses. Many researchers in this field claim that the behaviour of the poor as described by Lewis is a universal feature found among all poor groups in modern societies. Studying the wide empirical academic achievements of those times, we may not exclude the studies of poverty made by an American sociologist, David Matza. In his significant work 'Delinquency and Drift', considering the socio-cultural aspects of poor societies, he distinguishes three concentric spheres: the largest (comprising all the poor according to the relative rule of low incomes), a narrower circle (the poor living on social care support and burdened with the mark of moral decay), and finally the last circle (consisting of the unemployed or part-time workers creating their own society and moral-decay behaviours).
A number of publications on this issue underline a vivid interest in the problem of poverty worldwide, shed light on its diagnosis and allow further open social discourse. Life in poverty for children and youth is a multidimensional phenomenon touching different spheres of existence. It is determined by the inability to fulfil one's own needs which give a chance to participate fully in social life, and thus it curbs and restricts human development at all stages of personal and social life. The family surrounding in which a student grows up, through the process of socialization, strongly influences the quality and manner of his or her functioning in other circumstances. According to Lewis (1970), the life pattern is passed on from one generation to another by the poor in modern societies. Poverty, according to the author, has its structure and self-defence mechanism, which allows an individual to survive. It is an unchanged and permanent way of life. Moreover, poor living conditions work out a specific system of behaviour and values which can be regarded as a unique culture in an anthropological sense. Living and growing into the culture of poverty leads to an inability to fulfil social needs and condemns such a group of people to the social margin, and it is the youngest citizens who suffer most. All that results in a feeling of inferiority and indifference and in getting accustomed to a worse life. Such a marginal state may not only lead to a growing feeling of justified grievance against the surrounding reality and a growing consciousness of the harm, but first of all it teaches people to live differently. Poor social conditions which restrict young people growing into adulthood cause not only a loss of opportunities to carry out plans, but also frequently limit the awareness needed to look for better life offers. According to my research, children who come from poor societies perceive and experience their unfavourable educational and social conditions.
The standard of family living conditions is a factor in entering the area of social deprivation, providing lowered chances in future life, including the sphere of education, which results in entering the area of decay at different intensity. Following G. Baczewski's idea, the children of wealthy and well-educated parents are the most likely to reach a higher level of education. Children who represent lower social circles rarely succeed at school compared with their peers from upper social circles. Even at a comparable level of education, wealthy parents' children graduate with better results and from better-recognized schools, which increases their chances on the job market (Baczewski, 2004). Other authors who study the area of poverty stress the fact that children born into Polish families with low incomes have significantly lower chances for further education and poorer life prospects. It is poverty in villages and little towns first of all which may become a barrier to access to education (Zahorska, Putkiewicz, 2001).
The problem of poverty in the generational transmission of deprivation has been studied by W. Warzywoda-Kruszyńska (2008, 188), who claims that "…in Poland, the probability of getting a higher education by a son or a daughter of a father with a lower level of education is eleven times lower than by descendants of a male with higher education. It is significant in comparison to Germany and Finland, where the barriers are the lowest and such probability is only lowered twice".
Numerous views on people's lives in unsatisfactory and severe conditions confirm the belief that it is not only economic factors that influence the planning of a reasonable life perspective; the societies themselves may shape characteristic behavioral patterns for their children, whom they will influence, forming a generational culture of poverty.
Studies based on my survey, carried out among a selected group of children and youth from poor families, do not show uniform behaviours, which means they should not be generalized. However, they reflect certain psychological and pedagogical school difficulties which may be regarded as characteristic of a group of students representing the area of poverty. They comprise the following:
• In most cases students show a sense of helplessness in problematic situations. When they have difficulty solving a problem they expect to receive help from the outside, otherwise they may quit the appointed task. They also do not believe in their own abilities and perceive themselves as less valuable and generally worse compared with their peers. Undoubtedly, such behaviour results from the way the children were brought up and from the unconscious adverse effect of the family surroundings, which means that the process of socialization led here to helplessness, social dependency and lowered self-esteem. It may result in an inability to change the real life situation of the child or young person in the future.
• Stigmatization and isolation within a peer group, most frequently among the youth, leads towards alienation and isolation of those who come from poor families. It further results in contacts limited to peers with a similar social and emotional status.
• A feeling of isolation and a lack of enthusiasm to make contacts with classmates have been observed among these children.
• Peer relationships take different shapes, because most of the diagnosed children and youth experience and use violence towards their colleagues.
• A characteristic feature is the contrast in the behaviour of students who come from poor family backgrounds. An observable division exists between shy, quiet, rarely active students who speak up reluctantly, and hyperactive ones who want to draw the attention of the teacher and their peers.
• This inadequate behaviour is combined with low school achievements. A typical measure here is common attendance at compensatory school classes or even resitting the year. Consequently, such students often end their school education at middle school level.
• A common feature found among these children is school truancy, especially at the level of middle school.
• The longer the process of socialization in poor families lasts, the less educational it appears, and aspirations concerning a future job and personal plans are lowered.
School is significant in a young person's life, as it may support integration and curb the process of social exclusion. Works in this field point to the fact that school may frequently become the only place where the process of socialization is carried out in the right way. The school environment needs to adapt its actions to the individual, which means stimulating each student's activity, and perceiving and not accepting school failures or peer conflicts reflected in the exclusion of an individual by his/her peers. Specific actions taken up by the school towards students from poor families in order to equalize peer chances should be focused not only on the material side, compensating for current unmet needs, but should also have the character of an investment in the future, giving chances to support and develop the student's consciousness, which would create chances for a better life.
Conclusion
School must be obligated to give equal opportunities and ease the distance of social diversity, being at the same time a mediator in creating social and cultural educational chances for children and the youth, having in mind that the activity of the school environment influences each student's individual biography.
Adaptation difficulties found among students from poor backgrounds in the school environment often demand psychological and pedagogical intervention. Considering the areas above, it is suggested to develop a high level of existential awareness. This means perceiving one's own position and deciding on one's own operating strategy by using creativity, dealing with different situations, and the ability to recognize crucial situations in order to solve a problem rather than escape from it. These are also distant and close aims based on worthy life experience which certainly help in developing self-management skills. The latter is not an easy activity. It is reflected in an appropriately managed action strategy, developed and shaped since early childhood across different situations, problems and daily events. Such a strategy can be arranged consciously through a modelled and evocative upbringing background organised in different conditions by the school.
In order to develop these abilities, rational and consciously planned activities are needed to build up students' self-esteem. It is an important factor of human behaviour which influences planned aims and activities carried out at every stage of an individual's life, showing at the same time the level of functioning in the cognitive, moral, social, emotional and perspective spheres. A child with a positive attitude towards life believes in his/her own abilities. Thus such a child undertakes new tasks, which seem to be a new challenge and an area to improve competences and get a feeling of satisfaction. A child with low self-esteem perceives new tasks with fear, as a threat, owing to a perceived lack of abilities and competences. That is why the latter avoids such tasks in order not to be criticized or feel shame and fear. Many psychologists point out the great importance of self-evaluation in the process of forming personality. They claim that disorders may be caused by negative messages from those who take part in the process of education. It means that self-esteem is shaped by a number of factors anchored in the family surroundings, such as the conditions and process of upbringing (upbringing conditions stimulating children's growth and self-development, with the world regarded as safe; on the contrary, negative and unfavourable upbringing conditions are a danger to the psychical life of a child), family living conditions, as well as the parents' education. Hence, conscious and planned actions taken up by the school, aimed at building students' strong self-esteem and linked with improving the processes of self-knowledge and self-consciousness and improving social competences supporting the perspective of positive existence, are a necessity and a pedagogical challenge at the same time. School syllabuses, referring to the upbringing process at each level of education, should comprise aspects and categories helping the self-upbringing and self-development of all students, no matter what social background they represent.
References
[1] Baczewski G., Zjawisko ubóstwa w Unii Europejskiej i w Polsce jako uwarunkowanie
rozwoju ekonomicznego, PWN,Warszawa 2004. (The phenomenon of poverty in the EU
and Poland as a condition of the economic growth).
[2] Lewis O., The Culture of Poverty, in: Lewis O., Anthropological Essays, New York,
Random House, 1970.
[3] Warzywoda-Kruszyńska W., Bieda dzieci w polu zainteresowania Unii Europejskiej,
Warszawa, 2008. (Poverty of children as an interest of the European Union).
14
Poverty of children and youth as a contemporary social problem
[4] Zahorska M., Putkiewicz E., Społeczne nierówności edukacyjne – stadium sześciu gmin,
Instytut Spraw Publicznych, Warszawa 2001. (Social education a line qualities – a
studywithin 6 communes).
[5] Raport UNICEF 2012. Ubóstwo dzieci: Najnowsze dane statystyczne ubóstwa dzieci
w krajach rozwiniętych, Innocenti Research Centre, Florencja.
Summary
Key words: young generation poverty, social problem, personal consequences, social consequences,
social support
The aim of the presented text is to illustrate the dangerous consequences of children growing up in deep family poverty. Poverty emerges as a significant and current social problem involving personal and social consequences. The sphere of poverty involves children and youth who, though in fact the least responsible for the situation of the state and of their families, are subjected to multi-dimensional deprivation. The drama of their lives lies in health hazards and, to a great extent, in the danger of perpetuating poverty patterns and the negative phenomena accompanying them. Thus a reasonable solution is to escape poverty through conscious educational investment in the young generation, starting from an early age, conscious help from the local surroundings, as well as social care support provided by the state. By breaking low educational consciousness, which leads to gaining low educational qualifications, with the support of the state and local authorities we may shape ambitious patterns and attitudes with regard to the life prospects of children and youth who come from poverty backgrounds.
Ubóstwo dzieci i młodzieży jako współczesny problem społeczny
Streszczenie
Słowa kluczowe: ubóstwo młodego pokolenia, problem społeczny, konsekwencje personalne,
konsekwencje społeczne, wsparcie społeczne
Celem zaprezentowanego tekstu jest ukazanie niebezpiecznych konsekwencji dorastania dzieci
w warunkach ubóstwa materialnego swoich rodzin. Ubóstwo to ważny i aktualny problem społeczny, niosący ze sobą konsekwencje personalne i społeczne. W obszar ubóstwa uwikłane są
dzieci i młodzież, które najmniej odpowiedzialne za sytuację kraju i rodziny muszą podlegać
wielowymiarowej deprywacji. Dramatyzm ich życia polega na zagrożeniu zdrowia oraz w dużym
stopniu na niebezpieczeństwie utrwalania się niedostatku i towarzyszących mu wielu niekorzystnych zjawisk wykluczających z życia społecznego. Realnym przedsięwzięciem jest konieczność
podejmowania systemowych działań, włączając do wsparcia egzystencjalnego dzieci z rodzin ubogich zakres racjonalnych przedsięwzięć środowiska szkolnego i lokalnego.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Marcin Łączek
Warsaw University
Promoting community cohesion in English education
settings on the example
of Barnfield South Academy in Luton1
Introduction
The main focus of attention of the current paper will be the coexistence of diverse ethnic groups and languages within one education unit – Barnfield South Academy (henceforth BSA) in Luton (formerly South Luton High School – until August 31st 2007), England. It is both a multilingual and multicultural (i.e. of distinct racial, cultural and ethnic identities) secondary school, a member of the Barnfield Federation, sponsored by the local college – Barnfield College (rated in the top 10% of FE colleges in Britain), with students' ages ranging from 11 to 16 (i.e. key stage 3 and key stage 4).
Teaching and learning in English schools
Before I dwell on the index phenomenon, a short remark on the organisational
aspect of both teaching and learning in English schools (the same system operates
in Wales and Northern Ireland). Thus, after the pre-school period (up to the age of
5), pupils attend infant, primary and secondary school, respectively (the latter at the
age of 11). Any students wishing to continue their education, and depending on the
1 This paper was read at the international conference Languages and cultures in contact – then and now at Polonia University in Częstochowa on March 27th 2009.
number of GCSEs they have managed to achieve (students can also study vocational GCSEs and BTEC diplomas), they may go on to further education colleges
(offering typically vocational or technical courses) or take a higher level of secondary school exams known as A Levels (these are required for university entrance in
the UK). Education is obligatory for all students up to the age of 16.
A typical English school day (on the example of BSA) starts with the statutory
morning register (done by the form tutors either electronically or in writing), followed by period 1 and 2 after which there is PfL and a 15-minute break. There are
two break slots so that KS3 and KS4 students (Years 7, 8, 9, 10 and 11) do not have
it all at one time. Then we have periods 3 and 4, followed by a half-hour lunch
(three slots this time – some classes might have their break in the middle of the
lesson). Last, but not least, period 5 with yet another obligatory register taken by the
teaching staff (in practice, teachers take the register during each lesson). Schools are
free to set their own hours; at BSA these fall between 0800 hours and 1600 hours
with the morning register commencing at 8:45 am and period 5 ending at 3 pm.
After that time students can participate in extra curricular activities. This routine can
be changed in some special circumstances (be they mock or proper exams, for instance). The school year usually begins in the first week of September and ends in the third week of July; each term consists of two half-terms separated from one another by a week-long break. Surprising as it may seem, we have no bells at BSA and there are no
extra breaks in the meantime either: students swap the classrooms, if necessary,
every 60 minutes (this is how long each period lasts) and only upon the teacher's consent. All Barnfield South Academy students are obliged to wear school uniforms
and identity cards and anyone arriving at school in non-uniform clothing is sent
home to get changed (as is the case in most English education settings). Students
need to attend school regularly and their attendance is monitored by an attendance
officer and, then, supervised as well by an EWO appointed directly by the government. Parents can face fines or even imprisonment if their child’s attendance falls
below the level of 95% expected by the government. Unlike Polish students, their English
counterparts do not receive any marks throughout the school year (but for the
exams, of course, marked by independent exam boards) and are automatically promoted to the next year. Underachievers or those showing inappropriate instances of
conduct are penalised by sitting a detention, being placed on report, seclusion or
permanent exclusion – in contrast to many Polish schools, the latter does not remain in the field of theory only. But now to the main question.
The coexistence of diverse languages within BSA
The concept of language
For one thing, language, a dynamic process used to present either semantic or pragmatic meanings (in linguistics we can also talk about meaning that derives from syntax), is perceived as a collection of varieties2 and, for another, as a system of arbitrary symbols used for human communication3. Construed as the external expression of our internal thoughts, this truly structured social, cognitive, and linguistic enterprise4 is used whenever a need to realize discourse meaning occurs in the continuous and changing contexts of our daily life, the most important of which is the communication of information.
Sometimes, though, it might happen that the language used has no information content at all but merely serves to keep channels of communication open; this phenomenon has been termed phatic communion, and Brown and Yule give the following example: [w]hen two strangers are standing shivering at a bus-stop in an icy wind and one turns to the other and says 'My goodness, it's cold', it is difficult to suppose that the primary intention of the speaker is to convey information. It seems much more reasonable to suggest that the speaker is indicating a readiness to be friendly and to talk5. Indeed, the value of the use of language embedded in our cultural mythology has enabled the human race to develop diverse cultures, each with its distinctive social customs, religious observances, laws, oral traditions, patterns of trading, and so on6. At times, though, its interpretation can diverge from what its producer has intended, the major reason being different repertoires. In
2 Blommaert 2007, 13.
3 Hatch 2001: 1.
4 Ibidem, 292.
5 Brown – Yule 2007, 3.
6 Ibidem, 2.
fact, according to Brown and Yule (2007) people only produce language when given
such an opportunity with interpersonal use of language prevailing over primarily
transactional one and, it needs to be emphasised, the linguistic resources they possess do differ as there are no two human beings able to refer to such resources in
the same manner even if they are inhabitants of the same country, born and bred,
and, therefore, able to speak or write the same language at the same level of proficiency. In addition to that, it is the same repertoire that might be blamed once misunderstandings between communicating parties occur as they allow people to deploy
certain linguistic resources more or less appropriately in certain contexts 7. Also, the authors of
the Sapir-Whorf hypothesis claim that the categories of language that categorise things grammatically can influence (but do not necessarily determine) the way people construe the world, with the three key terms in the formulation of the index hypothesis being 'language', 'thought', and 'reality'8. This has been captured by Chafe (2008) with the metaphor of a flowing stream; there are, in fact, two streams, each with very different qualities: the stream of thoughts and the stream of sounds:
It is instructive to compare the experience of listening to a familiar language with listening to
a language one does not know. In the former case it is the thoughts, not the sounds, of which one is
conscious, but in the latter case only the sounds. Sounds are easier for an analyst to deal with,
simply because they are publicly observable. Thoughts are experienced within the mind, and for that
reason are less tractable to objective research. On the other hand, thoughts enjoy a priority over
sounds in the sense that the organization and communication of thoughts is what language is all
about. The sounds exist in the service of the thoughts, and follow wherever the thoughts may take
them. It is the thoughts that drive language forward9.
Having said that, let me add that according to Halliday (1976), language always
serves three overarching functions: ideational (representing people, objects, events,
and states of affairs in the world), interpersonal (expressing the writer’s or speaker’s
attitude to such representations), and textual (arraying the above in a cohesive and
appropriate manner).
7 Blommaert 2007, 13.
8 Johnstone 2008, 43.
9 Chafe 2008, 673.
The concept of bilingualism
Bilingualism, understood as the phenomenon of competence and communication in two languages, describes the ability to communicate in two languages alternately (as opposed to, for example, trilingualism) and is a specific term used to
describe a multilingual person (or a polyglot). Following Lam (2003), one can differentiate between individual bilingualism versus societal bilingualism, perfect bilinguals versus imperfect bilinguals, dominant bilinguality versus balanced bilinguality,
simultaneous bilingualism versus successive bilingualism (also referred to as second
language acquisition), additive bilingualism versus subtractive bilingualism, complementary bilinguality where one language is used in a few domains (such as work
or home) and another in others (such as education, for instance), so the register is
limited (diglossia is a term used to describe the stable use of two linguistic varieties
for different domains of language use in a society) or receptive bilingualism which is
the ability to understand both languages but different components are mastered
(e.g. speaking or writing only); receptive bilingualism, in turn, should be distinguished from mutual intelligibility which stands for lexical and grammatical similarities of two languages. Individuals may also have bidialectal (bidialectalism: communication in more than two dialects of the same language) or biscriptual abilities
(reading more than one script of the same language) within one language.
But even if someone is highly proficient in two or more languages, their so called
communicative competence which involves not just mastery of the linguistic system, but
the ability to use language in conjunction with social practices and social identities in ways which
others in the community will recognize to perform a myriad of social activities such as engaging in
small talk, making transactions, joking, arguing, teasing, and warning10 may not be as
balanced. As a result, we can speak of compound bilinguals usually fluent in both
languages and coordinate bilinguals who use the first (more dominant) language to
think through the second language (a sub-group of the latter is the subordinate
bilingual – typical of beginning second language learners). Also, on the basis of
memory organization, bilingualism can be coexistent (languages kept separate),
10 Bhatia et al. 2008, 5.
merged (used interchangeably) and subordinate (L2 learned on the basis of L1) as
“[p]ragmatically most students from a minority language need to function in the
minority and majority language society” 11. Following this, one can differentiate
between language maintenance when non-dominant or minority groups develop
competence in their own L1s while they learn the dominant or official language,
language shift, which is the gradual loss of use of the L1 in a particular population, or
language death implying the complete loss of speakers of a language.
English as an additional language bilinguals
There are currently 156 EAL bilingual students (these are not special education needs pupils in any way, though) learning within the BSA premises and representing roughly 19% of the school's total population (808 individuals on roll as of March 4th 2009; this figure is in a constant state of flux), with the top three foreign languages spoken (out of 25) being:
1. Urdu – 24%
2. Polish – 22%
3. Turkish – 12%.
Other languages spoken within the school also include: Akan, Arabic, Bengali, Butt, Chichewa, Dutch, Farsi, Flemish, French, Greek, Gujarati, Lithuanian, Ndebele, Nyanja, Pashto, Portuguese, Punjabi, Romanian, Shona, Slovakian, Swahili and Yoruba12.
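A short sketch makes the quoted shares concrete by converting them into approximate headcounts; all figures come from the text above, and simple rounding is the only assumption.

```python
# Converting the quoted EAL shares into approximate headcounts.
# All figures are taken from the text above; rounding is the only assumption.
school_roll, eal_total = 808, 156
print(f"EAL share of the school roll: {eal_total / school_roll:.0%}")   # ~19%

top_three = {"Urdu": 0.24, "Polish": 0.22, "Turkish": 0.12}
for language, share in top_three.items():
    print(f"{language}: ~{round(eal_total * share)} students")
# Urdu ~37, Polish ~34, Turkish ~19; the remaining 22 languages account
# for roughly 66 students between them.
```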
The EAL students' level of English, acquired and/or improved in naturalistic language acquisition settings13, spans all stages of language development, whose assessment covers the productive skills of speaking and writing and the receptive
11 Baker 2003, 351.
12 Foreign languages spoken in Luton (March 16th 2009): Albanian, Arabic, Bengali, Cantonese, Czech, Dari, Dutch, Estonian, Farsi, French, Ga, Georgian, German, Greek, Gujarati, Hindi, Hungarian, Italian, Kurdish, Latvian, Lithuanian, Lingala, Mandarin, Ndebele, Pahari, Polish, Portuguese, Punjabi, Pushto, Russian, Serbo-Croat, Slovak, Slovenian, Spanish, Shona, Somali, Swahili, Sylhetti, Tamil, Tagalog, Tigrinya, Turkish, Twi, Ukrainian, Urdu, Yoruba, and Vietnamese.
13 Explained by Lightbown and Spada (1999) in Majer (2003) as those in which the non-native speaker is exposed to the target language in social or professional contexts (e.g. interaction at home, work or school).
skills of listening and reading. These, in turn, for bilingual pupils in English education settings range from the beginner level, stage 1 (a child can understand very
basic classroom instruction and has a very limited command of English), stage 2 (a
child can cope with simple classroom instruction but has difficulty with extended
talk and limited literacy skills), stage 3 (a child can engage in still limited written and
spoken tasks and is in need of support to match peer group level), stage 4 (a child
can operate successfully with no need for additional support) to being fully competent.
Strategies to support EAL students
The statutory statement on inclusion is to provide effective learning opportunities for all pupils with its three principles being: setting suitable learning challenges,
responding to pupils’ diverse needs and overcoming potential barriers to learning
and assessment for individuals and groups of pupils. It makes high demands on
bilingual learners too. The strategies available at BSA to guide EAL pupils (some ethnic minority students originate from post-colonial countries where English, both during and after the empire's heyday, continued to play an institutionalised role, and who thus speak English fluently) into real learning experiences with regard to the English National Curriculum are (in order of application after the child's arrival):
• language assessment (use of multilingual labels)
• a welcome booklet for pupils to provide visual help in their mother tongue and an introduction to the school layout
• pairing the new arrival with a supportive friend to help meet their immediate needs: buddy system14
• information factsheets regarding Section 444(1) Education Act 1996 (attendance issues) for parents
• additional ESOL classes
• ILPs with information on how teachers and parents/carers can contribute towards the child's achievement (simplifying texts, recording information via concept maps and writing frames, key words, use of bilingual dictionaries, practising spellings etc.)
• assistance with homework (EAL department enrichment time)
• EAL support during subject matter domains15 and a range of sets to choose from
• inviting members of the pupil's community into school on a regular basis
• GCSE in the native tongue or ESOL Skills for Life certificate as a way of gaining an alternative to a standard qualification.
14 This reminds me of Lave and Wenger's (1991) legitimate peripheral participation – a process that could be characterised metaphorically as an apprentice-master relationship in which newcomers at first can participate only in less-engaged or peripheral ways, but as they interact with old-timers, they eventually participate more and more fully until they themselves become old-timers.
In the further course of this study I shall look in more detail at which ethnic minority students actually attend BSA and, subsequently, how community cohesion is
promoted.
The coexistence of diverse ethnic groups within BSA
Ethnic minority students
BSA does not stand out from other English education settings and British teachers, it has to be admitted, face challenges every day in the classroom with groups
from diverse backgrounds (be they linguistic, ethnic, racial or cultural) such as
migrant workers or refugees from all over the world. Although English is the main
language of the United Kingdom with Welsh, Scottish Gaelic, Lowland Scots, Cornish and Irish all constituting minority languages, we cannot forget about immigrant
15 By subject matter domains we mean the heterogeneous (i.e. multinational and multicultural) subject classroom where the immigrant child is placed in a class with native-speaking pupils. Ellis (1985), quoted in Majer (2003), distinguishes four other types of input and interaction: the foreign language classroom (EFL), in which English may be the second language of a fairly homogeneous, monolingual group; the second language classroom (ESL), where English is the dominant language of the speech community; the bilingual classroom, where L2 students receive instruction through both L1 and L2; and, last but not least, the immersion classroom, where a class of L2 students is taught subject matter via L2. According to Majer, the ESL-EFL polarisation (geographical, educational, socioeconomic and political) has best been reflected with the help of the terms BANA (British, Australian, North American methods) and TESEP (tertiary, secondary, primary methods of ELT), referring to state language education as it is practised largely in countries where English is a second or foreign language. The index concepts used in literature correspond roughly to the English-speaking world and the English-learning world, respectively.
languages brought in in recent decades (since World War II) by migrating communities and used in the UK on a daily basis, such as Punjabi, Pahari-Potwari, Gujarati, Hindi, Urdu, Bengali, Polish, Cantonese, Mandarin, Spanish, Greek, Arabic, Portuguese, French or Turkish, to name just a few. Indeed, those index groups' children have brought with them, both during mid-term and non-routine admissions, not only specific needs and differing learning experience but also a complex mix of abilities.
At BSA16 ethnic minority students constitute a significant number – let me quote the percentages representing their major backgrounds17:
• Asian or Asian British: Bangladeshi – 6%
• Asian or Asian British: Indian – 1%
• Asian or Asian British: Other Asian – 1%
• Asian or Asian British: Pakistani – 11%
• Black or Black British: Black African – 7%
• Black or Black British: Black Caribbean – 4%
• Black or Black British: Other Black – 2%
• Chinese or Other Ethnic Group: Other Ethnic Group – 3%
• Mixed: Other Mixed – 2%
• Mixed: White and Asian – 3%
• Mixed: White and Black Caribbean – 3%
• White: British – 43%
• White: Irish – 1%
• White: Other White – 6%
16 cf. Luton's population (184,356 inhabitants in total) by ethnic group according to the 2001 census: White: British – 62%, White: Irish – 3.8%, White: Other White – 2.3%, Mixed: White and Black Caribbean – 1.6%, Mixed: White and Black African – 0.3%, Mixed: White and Asian – 0.7%, Mixed: Other Mixed – 0.6%, Asian or Asian British: Indian – 4.2%, Asian or Asian British: Pakistani – 10.9%, Asian or Asian British: Bangladeshi – 4.9%, Asian or Asian British: Other Asian – 0.9%, Black or Black British: Black Caribbean – 4.2%, Black or Black British: Black African – 2.0%, Black or Black British: Other Black – 0.6%, Chinese or Other Ethnic Group: Chinese – 0.7%, Chinese or Other Ethnic Group: Other Ethnic Group – 0.4%; this diverse mix of cultures can best be seen during Britain's biggest one-day carnival: Luton International Carnival.
17 The current detailed framework of ethnic coding according to the 20th UK Census (commonly known as Census 2001), conducted nationwide in the United Kingdom on Sunday, April 29th 2001, is given in the appendix.
• White: Turkish (including Turkish Cypriot) – 3%.
To the above list18 one more figure should be added – that of students refused permission to continue their education at the academy, amounting to as much as 3%.
One team, one purpose, one standard: promoting community cohesion
The Education and Inspections Act 2006 inserted a new section 21(5) into the Education Act 2002, introducing a duty on the governing bodies of maintained schools to promote community cohesion; although most schools already do this and consider it a fundamental part of their role, it became obligatory as of September 1st 2007. As a consequence, many local authorities are working to promote community cohesion and place it high on their agenda, as does Ofsted.
So how, then, is community cohesion, the key contributors to which might be integration and harmonization (perfectly grasped in the above-mentioned BSA principles), promoted at the school? How do ethnic minority learners who come from a variety of linguistic, ethnic, racial and cultural backgrounds become socialized into school, community or society and made to feel equally appreciated and valued? Rather than through impromptu action, a conscious plan of community development events and activities has been pursued at BSA since the beginning of the academy's existence, presented (in chronological order) below:
• school nurses present to Year 7 and 8 girls
• local police present to Years 8-11 in assembly on knife crime
• pupils from Years 7-8 and 9-10 visit the Luton Mayor and his chambers at the town hall (arranged after His Worship the Mayor and local Councillor's visit)
• pupils attend the Pride in Luton regeneration project (Safer Neighbourhood Team meetings for the south of Luton initiative)
• EAL students attend Languages Day at Streetfield Middle School in Caddington
• Year 10 peer mentors attend Hillborough Chill Zone youth club
18 Talking of the year groups, the above division looks as follows: Year 7 – 19%, Year 8 – 21%, Year 9 – 19%, Year 10 – 20%, Year 11 – 19%.
• St Patrick's and St George's Day celebration
• tackling racism in the academy meeting (a house system competition initiative)
• Luton Churches Education Trust deliver a sex and relationship workshop to Year 8 pupils
• HMP Wellingborough representative delivers a drugs and crime presentation to Years 7-8
• The Anne Frank Trust deliver the Free To Choose Citizenship programme to all Year 10 pupils
• Safer Luton Partnership representative presents to Rowan House group on Pakistani/Kashmiri culture; Luton Churches Education Trust present to Elm House group on British culture and a Youth Worker representative presents to Beech House group on Caribbean culture
• Year 7 pupils deliver money raised at the St Patrick's Day celebration to Luton Irish Forum
• academy police surgery starts
• Afro-Caribbean Performance Arts Company delivers a dance workshop to pupils in Years 7-9
• Dance Theatre Company starts weekly dance and drama workshops with pupils during enrichment
• Eid, Polish Independence Day and Christmas cake and carol celebrations
• community exchange programme with Indonesia
• BSA Black History celebration evening
• school nurses present to Year 8
• Remembrance Day
• Christmas Hamper Competition donated to local charities and organizations in Luton: Luton Accommodation and Move on Project (LAMP), Keech Cottage Children's Hospice and Noah Enterprise.
The above-listed events and activities, together and individually, have helped the academy, first and foremost, to create an opportunity to communicate and then build positive relationships with the local community. They have also been an occasion to celebrate different cultural events, the aim of which is to reduce young people's negative attitudes and break down barriers by learning more about different cultures, and have thus helped build a much-needed sense of trust between them. As a result, understanding and tolerance among pupils have gradually developed, with new skills learnt and good practice shared. Last but not least, they have certainly helped educate young people about choices and promote each individual child's self-esteem.
Conclusion
Having new pupils arrive from all over the world can undoubtedly be very stimulating to the school and provide great opportunities to learn about, among other
things, different languages, cultures and religions. It can bring a freshness to class
and form group interaction and allow pupils to take on specific responsibilities in
helping new children settle as quickly as possible so that the child can learn effectively and enjoy schooling. It might be the case, though, that language minority
children in mainstream schools are withdrawn from lessons in the majority language;
[h]owever, 'withdrawn' children may fall behind on curriculum content delivered to others not in
withdrawal classes. There may also be a stigma for absence. A withdrawal child may be seen by
peers as ‘remedial’, ‘disabled’, or ‘limited in English’.19
For these reasons it seems a must to promote, in a pluralist society, community cohesion and multiculturalism: "the ideal of equal, harmonious, mutually tolerant existence of diverse languages, and of different religious, cultural and ethnic groups"20 – starting already in the classroom.
Bibliography
[1] Baker, Colin. 2003. Foundations of bilingual education and bilingualism. (3rd edition.)
Clevedon: Multilingual Matters Ltd.
[2] Barnfield South Academy in Luton internal documentation.
19 Baker, op. cit., 197.
20 Ibidem, 402.
[3] Bastiani, John (ed.). 1997. Home-school work in multicultural settings. London: David
Fulton Publishers.
[4] Blommaert, Jan. 2007. Discourse. Cambridge: Cambridge University Press.
[5] Bhatia, Vijay K. – John Flowerdew – Rodney H. Jones. (eds.). 2008. Advances
in discourse studies. Abingdon: Routledge.
[6] Brown, Gillian – George Yule. 2007. Discourse analysis. Cambridge: Cambridge
University Press.
[7] Carter, Ronald – David Nunan (eds.). 2006. The Cambridge guide to teaching English
to speakers of other languages. Cambridge: Cambridge University Press.
[8] Chafe, Wallace. 2008. “The analysis of discourse flow”, in: Schiffrin et al., pp.
673-687.
[9] Cole, KimMarie – Jane Zuengler. 2008. The research process in classroom discourse
analysis. Current perspectives. New York: Lawrence Erlbaum Associates.
[10] Cunningham-Andersson, Una – Staffan Andersson. 2002. Growing up with two
languages. A practical guide. (2nd edition.) London: Routledge.
[11] Hatch, Evelyn. 2001. Discourse and language education. Cambridge: Cambridge
University Press.
[12] http://www.barnfield.ac.uk/southacad/vision.php (date of access: 4 Mar.
2009).
[13] http://www.communities.gov.uk/communities/ (date of access: 4 Mar. 2009).
[14] http://www.luton.gov.uk/internet/social_issues/population_and_migration/l
uton%20observatory,%20census%20and%20statistics%20data/population%20
and%20households%20information (date of access: 9 Mar. 2009).
[15] http://www.luton.gov.uk/internet/social_issues/population_and_migration/l
uton%20observatory,%20census%20and%20statistics%20data/census%20info
rmation (date of access: 9 Mar. 2009).
[16] http://www.opsi.gov.uk/acts/acts2002/ukpga_20020032_en_1 (date of access: 8 Mar. 2009).
[17] http://www.opsi.gov.uk/acts/acts2006/ukpga_20060040_en_1 (date of access: 8 Mar. 2009).
[18] Johnstone, Barbara. 2008. Discourse analysis. (2nd edition.) Oxford: Blackwell
Publishing.
[19] Lam, Agnes “Bilingualism” in: Carter – Nunan, pp. 93-99.
[20] Lave, Jean – Etienne Wenger. 1991. Situated learning: legitimate peripheral participation (learning in doing: social, cognitive and computational perspectives). Cambridge: Cambridge University Press.
[21] Majer, Jan. 2003. Interactive discourse in the foreign language classroom. Łódź: Wydawnictwo Uniwersytetu Łódzkiego.
[22] Schiffrin, Deborah – Deborah Tannen – Heidi E. Hamilton. (eds.). 2008. The
handbook of discourse analysis. Oxford: Blackwell Publishing.
Summary
Key words: community cohesion, English educational system, language, bilingualism, EAL
(English as an additional language), ethnic minorities
The article presents the phenomenon of community cohesion in English educational institutions /on the example of Barnfield South Academy in Luton/.
The author presents strategies developed for students with English as an additional language, in line with the idea of one team, one goal, one standard, in promoting cohesion of the local community.
Promowanie spójności społecznej w angielskich placówkach oświatowych
na przykładzie Barnfield South Academy w Luton
Streszczenie
Słowa kluczowe: spójność społeczna, angielski system oświatowy, język, bilingwizm, EAL
(angielski jako język dodatkowy), mniejszości etniczne
Niniejszy artykuł, po przedstawieniu angielskiego systemu oświatowego a następnie pojęcia języka oraz bilingwizmu, ukazuje fenomen spójności społecznej w angielskich placówkach oświatowych (na przykładzie Barnfield South Academy w Luton).
Autor przywołuje strategie opracowane dla uczniów z angielskim jako językiem dodatkowym
w myśl idei szkoły jedna drużyna, jeden cel, jeden standard w ramach propagowania
spójności na gruncie społeczności lokalnej.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Mirosław Cienkowski
The Maria Skłodowska-Curie Warsaw Academy
Tomasz Wołowiec
School of Economics and Innovation in Lublin
Market reactions of entities to income tax
and managerial decisions
Introduction
One of the most controversial issues in economics is the question whether lowering income taxes can stimulate the economy to a quicker growth rate. In 2000 two authors, J. Agell and M. Persson, published a paper in which they examined the effects of tax reduction on the economic growth rate using an endogenous growth model, testing in this way the potential Laffer effect1. The authors verified the potential effects of tax reduction among 16 OECD countries and, based on simulations with an econometric model, reached the conclusion that the best growth effects can be obtained by lowering taxes in Sweden, Finland and Denmark2, that is, the countries with the highest tax and para-tax burden. The authors also indicated that the effect of economic growth acceleration depends on the adopted policy concerning public expenditure. It is generally possible only in conditions in which, after a period of acceleration, public expenditures are experiencing
1 Agell J., Persson M., On the Analytics of the Dynamic Laffer Curve, Department of Economics, Uppsala University; The Institute for International Economic Studies, Stockholm University, Uppsala-Stockholm 2000.
2 Ibidem, pp. 19-20.
a slow-down. If we assume that the share of government expenditure in GDP keeps increasing over time, then lowering taxes does not affect the economic growth rate3. This conclusion seems obvious. Tax reductions, according to Laffer's concept, are made in order to increase the dynamics of private sector development at the cost of the public sector, which requires limiting the public expenditure growth rate (or even its stagnation or reduction).
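The Laffer logic invoked here can be made concrete with a toy calculation. The sketch below is purely illustrative: the functional form of the tax base and the elasticity value are assumptions made for the example, not the Agell-Persson endogenous-growth model.

```python
# Toy Laffer curve: revenue = rate * base(rate), where the tax base
# shrinks as the rate rises. The functional form and the elasticity
# are assumed purely for illustration.
def revenue(rate: float, base0: float = 100.0, elasticity: float = 1.2) -> float:
    base = base0 * (1.0 - rate) ** elasticity   # assumed behavioural shrinkage
    return rate * base

rates = [i / 100 for i in range(1, 100)]
peak = max(rates, key=revenue)
print(f"revenue-maximising rate: {peak:.0%}")   # ~45% under these assumptions
# A country taxing above the peak is on the 'wrong side' of the curve,
# where a tax cut can raise both growth and revenue; this is the position
# the authors attribute to Sweden, Finland and Denmark.
```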
The height of tax rates and the shape of the income tax schedule can be a factor affecting job turnover. This issue has been analyzed by two economists, W.M. Gentry and R.G. Hubbard, in a paper on this topic4. They analyzed the relations between tax rates, tax convexity5 and job turnover based on the TAXSIM model used by the National Bureau of Economic Research. As the authors pointed out, job turnover reacts both to rate changes and to the convexity of the tax schedule (a measure of progressivity). “We estimate that a 5% reduction of the marginal tax (...) increases the likelihood of moving to a better job by 0.79%, while decreasing the tax system convexity measure by 3.12% (the value of one standard deviation) increases the likelihood of moving to a better job by 0.86% (...). For married men these results are slightly higher”6. This means that tax reductions encourage people to look for a better job, as employees are certain that possible additional pay will not be consumed by a higher tax rate. These results show that tax reductions positively motivate employees and that this influence is statistically significant. We can also conclude that the less progressive the tax system, the greater the inclination to look for a better job. The authors also stated, quoting another study of theirs7, that the convexity (progressivity) of the tax system has a relatively large negative influence on entrepreneurial decisions, such as entering a new market8.
3 Ibidem, p. 20.
4 Gentry W.M., Hubbard R.G., The Effects of Progressive Income Taxation on Job Turnover, National Bureau of Economic Research, Working Paper 9226, September 2002.
5 The measure of tax system convexity is ‘the difference between the weighted average (...) of marginal tax rates in various possible states of the tax situation and the marginal tax rate seen as a benchmark, when salaries grow by 5%’. The tax convexity measure may be treated as a measure of tax progressivity. Ibidem, p. 17.
6 Ibidem, p. 33.
7 Gentry W.M., Hubbard R.G., Tax Policy and Entry into Entrepreneurship, Mimeograph, Columbia University, July 2002.
8 Ibidem, p. 34.
In 2002 J.B. Cullen and R.H. Gordon published a paper on the influence of the tax system on taking up entrepreneurial activity9. Apart from theoretical considerations, the paper provided empirical data confirming the authors' theses with reference to the United States, using income statistics from 1993. Using a model based on regression equations, the authors reached surprising conclusions. “Contrary to common thinking, we believe that PIT cuts reduce entrepreneurial activity. Such a tax reduction limits the tax savings resulting from deducting interest on business loans, while profits remain largely taxed at the level of CIT rates10. As a result people are discouraged from risk taking. Moreover, as emphasized by Domar and Musgrave11, lower PIT rates lead to lower government participation in business risk, which accounts for the fact that taking up economic activity on one's own is less attractive for people who are unwilling to take risks. Potential tax savings result from moving to business only in order to re-classify income from personal to corporate for tax purposes; they also decline along with declining PIT rates. Tax effects may be huge. For example, we estimate that the introduction of a 20% flat tax would triple the share of self-employed people in the economy12. In practice, a 20% PIT would increase the average effective tax in the USA as well as entrepreneurial activity.” These results show that economists hold opposing views as to the influence of taxes on entrepreneurship.
An important factor deciding on the height of optimal tax rates is labor supply and its indirect measure – taxable income. The strength of the reaction of labor supply and, connected with it, of taxable income to changes in tax rates is a key dilemma in the theory of optimal taxation. A full measurement of the elasticity of taxable income was performed for American conditions by J. Gruber and E. Saez13. Their research covered the 1980s, when significant reductions of federal
9 Cullen J.B., Gordon R.H., Taxes and Entrepreneurial Activity: Theory and Evidence for the U.S., National Bureau of Economic Research, Working Paper 9015, Cambridge, June 2002.
10 In the USA in 2000, SMEs could use the 15% CIT rate while marginal PIT rates ranged from 39% to 48%, depending on the state.
11 Domar E.D., Musgrave R.A., Proportional Income Taxation and Risk Taking, “Quarterly Journal of Economics”, 58, 1944, pp. 388-422, quoted after: Cullen J.B. ..., quoted edition, p. 36.
12 Ibidem, p. 36.
13 Gruber J., Saez E., The Elasticity of Taxable Income: Evidence and Implications, National Bureau of Economic Research, Working Paper No 7512, Cambridge, January 2000.
and state taxes were implemented. They used a full panel of observations covering data from 46,000 tax return forms from 1979-199014. The research showed that the elasticity of taxable income grows along with income. Thus taxpayers from higher tax brackets react strongly to increases and decreases in tax rates. The overall elasticity of taxable income with respect to tax rate changes is mostly determined by taxpayers from the highest income group. The calculated elasticities for the USA equaled 0.180 for the group with income ranging from 10 to 50 thousand USD, 0.106 for the group with income from 50 to 100 thousand USD, and 0.567 for the group with income above 100 thousand USD15. These results also indicated that in the American reality of 1979-1990, as a result of implementing the tax reduction program, especially for the highest income group, the growth of taxable income in the highest income groups caused a major growth of global taxable income.
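For orientation, the elasticity figures above can be read through the definition standard in this literature (our gloss, not a quotation from the paper): the elasticity of taxable income relates the percentage change in reported income I to the percentage change in the net-of-tax rate 1 − τ.

```latex
% Elasticity of taxable income (ETI) with respect to the net-of-tax rate
e \;=\; \frac{\Delta I / I}{\Delta (1-\tau)/(1-\tau)}
```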
A. Goolsbee, in his paper What Happens When You Tax the Rich? Evidence from Executive Compensation16, deals with the fascinating issue of the reaction of the taxable income of executives of listed companies to changes in the marginal rates of personal income tax. The author used data submitted by companies as required by securities regulations in the USA: listed companies have to disclose the remuneration of the five most important executives. The survey covered the period 1991-1995. The obtained results indicated that taxable income strongly reacts to increases in the marginal tax rate17 in the short run. A. Goolsbee calculated the short-run elasticity for three income groups and obtained the value of 0.39 for the lowest group, 0.81 for the middle group and as much as 2.21 for the highest group18. As the author noticed, “Nearly all the reaction stemmed from
14 Ibidem, p. 7.
15 Ibidem, p. 21, table 8.
16 Goolsbee A., What Happens When You Tax the Rich? Evidence from Executive Compensation, National Bureau of Economic Research, Working Paper 6333, Cambridge, December 1997.
17 In 1993 (after Clinton was elected US President) the US Congress passed an increase of PIT from 31% to 36% for incomes between 140,000 and 250,000 USD and from 31% to 39.6% for incomes above 250,000 USD. Moreover, the upper limit of the Medicare contribution was abolished in 1994, which increased the marginal tax rate for people earning over 140,000 USD by a further 2.9%. It should be added that federal taxes are not the only income taxes in the USA, as such taxes can also be imposed by states and local self-governments.
18 Ibidem, p. 21. These groups had incomes: lower group 275-500 thousand $, middle 500 thousand – 1 M $, and upper above 1 M $.
changes in the realization of stock options by the executives with the highest incomes. It must be admitted that executives without stock options, even if they had high incomes, reacted very weakly to changes in marginal tax rates. Other forms of taxable income, such as salaries and bonuses, did not react in the short term”19. However, as A. Goolsbee proved, the changes in the taxable income of listed-company executives did not stem from a lasting change in their behavior but from changing the timing of these payments.
G.D. Myles20 reviewed growth models from the perspective of the influence taxation has on economic growth. He showed that in theoretical models we can isolate a series of channels through which taxation may influence growth, and that this influence can be significant. “Some models predict that the growth effect is minor, others predict that it could be major. What differentiates these models are the values of a few key parameters, especially the share of physical capital in generating human capital, the elasticity of the utility function and the depreciation rate. In principle, these figures could be isolated empirically and the size of the growth effect precisely determined. However, in order to do so, one would have to review a series of fundamental issues concerning model assumptions. Moreover, we would not be able to provide an answer without taking into account empirical evidence. Tax rates grew in most countries in the past century, which should be a sufficient basis for determining the current influence”21. As G.D. Myles concludes, “a thorough review shows that theoretical models take into account a series of issues that need to be considered, but do not present any convincing or definite answers”22.
On the other hand, as shown by Mendoza, Milesi-Ferretti and Asea23 in their regression models, the relation between taxation and the economic growth rate is small. Contrary evidence was supplied by Leibfritz, Thornton and Bibbee24. They calculated
19 Ibidem, p. 3.
20 Myles G.D., Taxation and Economic Growth, Institute for Fiscal Studies and University of Exeter, September 1999.
21 Ibidem, p. 11.
22 Ibidem, p. 21.
23 Mendoza E., Milesi-Ferretti G.M., Asea P., On the ineffectiveness of tax policy in altering long-run growth: Harberger's superneutrality conjecture, “Journal of Public Economics”, 63/1997, 119-140.
24 Leibfritz W., Thornton J., Bibbee A., Taxation and Economic Performance, OECD Working Paper No. 176, Paris 1997.
that in OECD countries in 1980-1995 a growth of the tax rate by 10% was accompanied by a decline of the economic growth rate by 0.5%, with direct taxation limiting growth more than indirect taxation25. The quoted research provides one clear conclusion: economists cannot unequivocally determine the influence of taxation on the economic growth rate in the long term. The evidence that taxation considerably influences the growth rate is weak. Such a conclusion may be shocking, but on the basis of the current results of economic research we cannot draw any other conclusion.
Income taxes versus managerial decisions
Taking rational decisions in a company, both current and strategic ones, requires knowing and taking into consideration the external conditions of the conducted activity. The accuracy of the decisions made, as well as the ability to adjust to a changing external environment, determines not only the effectiveness of the enterprise's operations but also its ability to continue its activity. In a proper business environment, the significance of feedback consists in adjusting one's reaction to received information on the effects of one's actions. In company behavior, the effectiveness of feedback as a method of modifying behavior aimed at improving the effectiveness and efficiency of undertaken actions depends on meeting some basic requirements26: precision (objectivity) of information; directness – if feedback happens just after the event, its recipient realizes the relation between the attitude and the result; and completeness – consisting in the possibility of taking into account all important relations. The company's existence in the long run depends on activities adjusting it to a changing environment. Adaptation activities, taking place both inside the company and in all its contacts with the environment, can also be forced by the fiscal policy of the economy27.
The tax system significantly influences the material and legal situation of households (through the level and nature of the fiscal burden and the structure of taxation) and of economic entities (being a cost element for companies and their owners). Those running business entities must take tax regulations into account in their decision-taking
25 Myles G.D., quoted edition, p. 18.
26 See: Penc J., Leksykon biznesu, Placet, Warszawa 2002, p. 411.
27 Compare: Skonieczny J., Działania adaptacyjne przedsiębiorstwa, „Przegląd Organizacji”, No 6/2001.
processes. Remembering that in a market economy the profit motive is a fundamental premise for economic development, tax legislators must be aware that only a part of gross domestic product may be (is) taken over by taxes without causing negative financial or economic effects. Creators of the tax system should take into consideration the fact that each tax burden is treated by entities as lowering their current and future wealth status. If there are high tax rates in the tax system, we can expect such effects as: a weakened economic growth rate, the development of the ‘grey zone’ economy, capital flow abroad and a simultaneously limited inflow of capital from outside. Legal regulations providing the framework for the operations of economic entities, and the taxation of income and capital owned by households, significantly influence market forces, consumption and investment expenses, the development of enterprises and economic growth.
With reference to companies we can distinguish three elementary economic effects of taxation: those regarding liquidity, assets and organization. Personal and corporate income taxes mainly negatively influence entrepreneurs' liquidity, as they lead to a definite burden placed on the entrepreneur (taxpayer)28. Both personal29 and corporate income taxes are ‘expenses’ which are not costs of obtaining revenue, and they lower company liquidity. Company liquidity is affected by the very way of determining the tax base. If taxable revenues from conducted economic activity are due
28 Indirect taxes (especially VAT) offer the possibility of passing the tax burden on to the consumer, therefore it is hard to formulate a clear opinion on the negative influence of this form of taxation on entrepreneurs' liquidity. For example, imposing VAT on the paid provision of advisory services negatively affects the entity's liquidity if the client pays the fees after the day on which the tax obligation, consisting in submitting a tax return for a particular period, originates. If the service and payment are made on the same day, this positively influences financial liquidity until the day of settling VAT with the tax office.
29 In case of the definite burden placed on the taxpayer (entrepreneur) through personal income tax, we must jointly take into account the tax and the various contributions for social purposes, obtaining a list of complementary charges placed on work (the tax wedge – labor costs). The use of the term ‘labor costs – tax wedge – labor taxation’ is justified for two reasons. Firstly, we should remember that in some countries there are various forms of financing social allowances, both on the basis of general taxes (budget financing) and in the form of contributions based on social insurance funds (non-budget financing). In most countries the tax wedge imposed on work and related to the total employment costs covered by the employer is nearly flat. This is because the progressive tax scale (a progressive tax scale may even be a single-rate scale, provided it reflects various tax preferences, including tax-free amounts and differentiated costs of obtaining revenue) is combined with digressive contributions. Regardless of the level of earnings, taxes and contributions together constitute a similar surcharge (calculated as a percentage).
revenues, even if they have not been obtained yet, payments received for deliveries of goods and services to be performed in the next tax years do not constitute taxable revenue in the year in which they were received. This means that revenues and costs are usually determined on the basis of the accrual method. The appearance of dues from, for example, sales on an installment basis leads to the appearance of revenue on the day the invoice is drawn, no later than on the last day of the month in which the goods were delivered. The appearance of due revenue leads to the origin of a tax obligation, usually in the form of down-payments during the tax year, even though the taxpayer has not received the payment yet. With reference to revenues from interest, exchange rate differences determined on tax principles, and compensations and contractual penalties, the legislator usually adopts the cash rule of revenue origin. This means that the revenue and the obligation to pay tax appear at the moment of receiving the money. Tax refunds, too, do not improve liquidity, as the refund (inflow) is preceded by an overpayment of tax (expense), which causes negative effects on liquidity. Company liquidity is also affected by the way irrecoverable claims are counted among the costs of obtaining revenue. These claims may become a tax cost only at the moment of obtaining a confirmation (decision) that they are irrecoverable, issued by the enforcement organ, or a court decision rejecting the motion for bankruptcy or discontinuing bankruptcy proceedings covering the liquidation of assets. Taking into account the fact that the process of documenting irrecoverable claims may last several months, this may generate a negative interest effect, resulting from the length of time between the day of paying tax on due revenue and the day of accepting the claim as a tax cost lowering the tax burden. The process of pursuing the claim also causes additional (non-tax) payments (expenses on the proceedings, enforcement and others)30.
On the other hand, an entrepreneur has depreciation write-offs at their ‘disposal’, that is, tax costs which lower the tax base but are not cash expenses. Taxpayers may make depreciation write-offs on fixed assets and intangible assets following the allowed
30 See more in: Kudert S., Jamroży M., Optymalizacja opodatkowania dochodów przedsiębiorców, ABC Wolters Kluwer business, Warszawa 2007; Sokołowski J., Zarządzanie przez podatki, PWN, Warszawa 1995; Hundsdoerfer J., Jamroży M., Wpływ podatków na decyzje inwestycyjne przedsiębiorstwa, „Przegląd Podatkowy”, No 11/1999.
methods and depreciation rates. Postponing tax payments is possible through: using the digressive method, one-off depreciation write-offs, increased depreciation rates, individually determined depreciation rates, and the choice of the valuation method for homogeneous, material elements of current assets (FIFO, LIFO, weighted average). In many legislations reserves and updating write-offs are treated as tax costs which do not cause the appearance of tax31.
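The valuation-method choice mentioned above can be made concrete with a short sketch (ours, not from the paper; the purchase lots are hypothetical): under rising prices, LIFO issues the most recently purchased, most expensive units first, so it books a higher cost of obtaining revenue and defers tax relative to FIFO.

```python
def cogs(lots, qty, method="FIFO"):
    """Cost of issuing `qty` units from purchase lots given as (units, unit_price)."""
    order = list(lots) if method == "FIFO" else list(reversed(lots))
    cost = 0.0
    for units, price in order:
        take = min(units, qty)   # issue from this lot as far as it lasts
        cost += take * price
        qty -= take
        if qty == 0:
            break
    return cost

lots = [(100, 10.0), (100, 12.0), (100, 15.0)]  # oldest lot first, prices rising

print(cogs(lots, 150, "FIFO"))  # 100*10 + 50*12 = 1600.0 -> lower cost, higher tax base
print(cogs(lots, 150, "LIFO"))  # 100*15 + 50*12 = 2100.0 -> higher cost, tax deferred
```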
The size of tax expenses is also affected by activities related to balance sheet events32. Transferring or increasing tax costs takes place within the possibilities offered to the taxpayer in the form of the right to choose or decide, for instance, which method of fixed asset depreciation to apply. The taxpayer may also have some freedom in determining the costs of generating fixed assets, depending on the adopted method of cost calculation. Restructuring activities in an enterprise also influence liquidity in the area of income taxation. The sale of an enterprise generates the disclosure of quiet reserves included in the assets of the sold enterprise and a growth of company value, which translates into taxation of the income generated as a result of the sale. Taxation of quiet reserves may be a factor limiting such transactions (the so-called asset deal). It is possible to avoid paying taxes on the day of selling the company by contributing the company as a contribution in kind, which postpones taxation until the shares obtained in return for the contribution in kind are sold33. Reliefs of this type can be divided into: facilities in payment which do not
31 In many OECD countries an interesting instrument is the creation of reserves for retirement benefits. If the partner has an employment contract with the company, then, within the remuneration, the company may also grant the employee retirement benefits paid out (with interest) after the employment relationship is terminated. If tax law treats reserves for retirement benefits as tax costs, there is an effect of postponing taxation. The company increases the reserves, showing costs of obtaining revenue, while the recipient of the retirement benefit taxes it only at the moment of receiving it. So the retirement benefit may be largely financed from the tax savings of a capital company (assuming that the period of employment is long enough).
32 Profit is a category of balance sheet law and is a result item in the profit and loss account of a company, shown on the basis of the accounting books. Income is a category of tax law, constituting the surplus of the sum of revenues over the costs of obtaining them. The differences between balance sheet profit and taxable income result from the fact that the goal of balance sheet regulations is not to allow profits to be shown too high in comparison with actual profit, which is achieved through the boundaries of balance sheet valuation and the principle of cautious valuation. The aim of tax regulations is not to allow the taxation base to be understated or the regularity of collecting budget incomes to be threatened.
33 Revealing quiet reserves in asset elements takes place, for example, as a result of company division, if the acquired assets are not an organized part of a company.
lower the amount of paid tax, those decreasing the amount of paid tax, and exemptions from tax payment.
This can be illustrated with the following example showing the influence of taxation and transfer of tax payments on maintaining liquidity.
An individual entrepreneur is going to purchase household appliances worth PLN 40,000 at the end of the year. In the four-year planning period we expect annual positive cash flows from economic activity (before taxation) in the amount of PLN 100,000. A surplus of financial means can be deposited in the enterprise at a return rate of 10%. In simplified form, cash flows decreased by the depreciation write-offs (on the appliances bought last December), calculated with the linear method (25% × PLN 120,000), equal taxable income. Income is subject to 19% taxation, while the consumption expenses of the entrepreneur-taxpayer (in the private sphere) amount to PLN 50,000 annually. External financing of the investment expense is not possible; we assume that expenses (including tax payments) are made at the end of the year.
Year                          1
Cash flow CF                  100 000 zloty
Depreciation D                30 000 zloty
Interest revenue IP           0
Taxable income TP             70 000 zloty
Income tax T = 19%            13 300 zloty
Consumption expenses CE       90 000 zloty
Deficit: CF – T – CE          – 3 300 zloty
The taxpayer-entrepreneur does not have sufficient means to finance the purchase of the household appliances in the first year. Since external financing (such as a bank loan) is out of the question due to the costs of obtaining it, the taxpayer may use internal financing through the policy of showing incomes. The taxpayer chooses the digressive method of making depreciation write-offs, calculated on general principles (increasing depreciation by a 2.0 ratio). Tax costs are increased in the first year and decreased in the fourth year.
Year                          1          2          3          4
Cash flow CF                  100 000    100 000    100 000    100 000
Depreciation D                (60 000)   (30 000)   (30 000)   (0)
Interest revenues IP          0          240        3 689      3 969
Taxable income TP             40 000     70 240     73 689     103 969
Income tax T = 19%            7 600      13 343     14 001     19 754
Consumption expenses CE       90 000     50 000     50 000     50 000
Surplus: CF + IP – T – CE     2 400      36 894     39 699     34 215
Transferring tax payments in time, as a result of increased depreciation costs in the first year, allows sufficient financial liquidity to be preserved. Other ways of shaping the shown income may be earlier documentation of irrecoverable claims or a release from debt (within the so-called policy of showing income), as well as an attempt at assigning the investment expense to the company's assets.
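The example can be reproduced with a minimal sketch (ours, not from the paper). It recomputes both tables above under the stated assumptions – 19% tax, a 10% return on the retained surplus, consumption of 50,000 plus the 40,000 purchase in year 1 – and matches the figures up to small rounding differences; a year-1 deficit is assumed to earn no interest.

```python
def surpluses(depreciation, cash_flow=100_000, tax_rate=0.19, rate=0.10,
              consumption=(90_000, 50_000, 50_000, 50_000)):
    """Yearly surplus CF + IP - T - CE for a given depreciation schedule."""
    result, prior = [], 0.0
    for d, ce in zip(depreciation, consumption):
        ip = max(prior, 0.0) * rate        # interest on last year's surplus, if any
        taxable = cash_flow - d + ip       # taxable income TP
        tax = tax_rate * taxable           # income tax T
        prior = cash_flow + ip - tax - ce  # surplus (or deficit) carried forward
        result.append(round(prior))
    return result

linear     = [30_000, 30_000, 30_000, 30_000]  # 25% straight-line
digressive = [60_000, 30_000, 30_000, 0]       # 2.0 ratio in year 1

print(surpluses(linear))      # year 1: -3 300 -> liquidity gap
print(surpluses(digressive))  # year 1: +2 400 -> the purchase can be financed
```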
The policy of showing income
The policy of showing income (in the case of residents) allows taxable incomes to be moved in time in order to minimize the discounted value of income tax, due to the periodical nature of tax payments. We should assume that there are no relations between paid income taxes and other, non-tax cash flows34. Within the policy of showing income we can discern activities aimed at shaping the actual state and at its interpretation. Shaping the actual state, an entrepreneur may take actions leading to the appearance of some future events, thus changing the circumstances of the actual state. Within the interpretation of the actual state, activities may concern the right to present past factual states in the balance account, and at the same time they may provoke different tax effects. The effect of the policy of showing income is the
34 In case the optimization (decrease) of paid income taxes may influence changes of other, non-tax cash flows (for example the size of net revenue from hotel services sales), the goal of minimizing the discounted value of tax payments is not always consistent with the maximization of current net value. So limiting oneself to the minimization of income taxation alone could lead to giving up the generation of income.
implementation of the process of moving incomes (and thus paid income tax) in time, which may result in the tax rate effect, the interest effect or the progression effect. The tax rate effect is the consequence of changes to tax rates or scales. For example, if the rate(s) of personal income tax is supposed to (may) be lowered next tax year, it is rational to move some (or all) incomes to the next tax year. Interest effects depend on the means applied within the policy of showing income. In a situation when incomes are moved by means of the interpretation of the actual state, there are differences in the tax burden, leading to temporary tax savings. The tax savings may be put on a deposit account, generating the tax interest effect. In case of moving incomes by shaping the actual state, there might also be differences in the tax burden, leading to temporary tax savings. The generated savings may also be put on a bank deposit account and generate the tax interest effect. Moreover, regardless of the tax aspect, a non-tax interest effect might be visible.
So, if the taxpayer arranges the delivery of goods in the new tax year rather than in the current one, the payment for the goods will be postponed by one month and the showing of the particular income will be postponed by a year (assuming that the taxpayer uses the down-payment form of settling taxes). Such behavior shapes two contradictory effects. On the one hand, there is a delay of the income tax payment by a year and, taking into account the particular tax rate(s) and the market interest rate, we experience the tax interest effect – the discounted value of the tax payment is decreased. On the other hand, postponing the payment for the goods results in the appearance of a negative non-tax interest effect in the shape of a decreased current net value before taxation35.
With moved incomes, the progression effect will appear only in the case of progressive tax scales used in constructing income taxes. With the implementation of the policy of showing income using the means of interpretation of the actual state, only the tax interest effect will be visible. As the discounted value of tax payments decreases as the payment of tax is moved forward, the taxpayer should aim at delaying the moment of showing the whole (or part) of the taxable income. Comparing discounted tax rates for particular periods, we should break down (dispose of) income so that it is taxed in the periods with the lowest discounted tax rate. Using the shaping of the actual state we achieve the same effect (with proportional rates), the only difference being that apart from the tax interest effect there will also be a non-tax interest effect.
35 In a situation when the negative non-tax interest effect exceeds the tax interest effect (taking into account the current net value before taxation), the taxpayer should not postpone the date of the goods delivery.
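As a minimal numeric illustration of the tax interest effect (our own, with hypothetical figures): deferring a tax payment T by one year at a market rate r lowers its discounted value by T·r/(1 + r).

```latex
% Tax interest effect of deferring a payment T by one year (hypothetical figures)
\Delta PV \;=\; T - \frac{T}{1+r} \;=\; T\,\frac{r}{1+r}
          \;=\; 19\,000 \cdot \frac{0.05}{1.05} \;\approx\; 905
\qquad (T = 19\,000,\; r = 5\%)
```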
The policy of showing incomes under a progressive tax scale makes it necessary to take into account, apart from the interest effect, also the progression effect. The choice of strategy must be preceded by an analysis of the type and course of the progression scale, reflecting the so-called ‘bumps’ at the end of each bracket, which is shown in figures 1 and 2.
Fig. 1. Example of a personal income tax scale, assuming four tax rates and a tax base expressed in euro (calculations of tax burden, average and marginal rates – hypothetical). The chart plots the tax burden, the marginal rate and the average rate against a tax base of 0-100,000 euro, with the marginal rate stepping up to 45%.
Source: own elaboration.
In implementing the policy of showing income with graded progression, we should consider the same strategy which is optimal with proportional rates, but in each analyzed period we should take into account the various (discounted) marginal rates36. Taking managerial decisions, the taxpayer should first move income to the period with the lowest discounted tax rate and then to the period with the next lowest discounted tax rate, and so on. If the taxable income movements are realized not by means of interpretation of the actual state but by shaping the actual state, then the taxpayer must also consider the non-tax interest effect. The activity then consists in maximizing the difference between the discounted (beneficial) tax effect and the discounted (detrimental) non-tax effect.
36 The marginal tax rate (known as the border rate) can be written down as dT_PIT(I) / dI. If the taxpayer wants to know the proper (actual) tax rate applicable to an additional increment of income, they must establish the marginal (border) tax rate. The derivative of the tax function with respect to the tax base (as a variable) is the marginal function of the tax scale. The average (real) tax rate is the quotient of the tax obligation (calculated at relevant rates) and the tax base: T_PIT(I) / I.
Fig. 2. Example of continuous progression in personal income tax, assuming four tax rates and a tax base expressed in euro (calculations of tax burden, average and marginal rates – hypothetical). The chart plots the personal income tax, the marginal rate and the average rate against a tax base of 0-100,000 euro, with the marginal rate rising continuously to 45%.
Source: own elaboration.
The graph clearly shows how the progression effect works, while taking the tax interest effect into the analysis leads to the conclusion that taxable incomes should not be distributed equally across particular periods; they should be shown in the first years in a slightly lower amount, and then in increasing amounts in the consecutive years. The optimum is reached when the discounted marginal rates are equally high in each period. They can be calculated using a system of linear equations.
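For a two-period case this system is simple to write down. A sketch (ours, with a hypothetical linear marginal-rate function m(I) = a + bI, i.e. continuous progression): leveling the discounted marginal rates, m(I1) = m(I2)/(1 + r), under I1 + I2 = I, gives a linear system whose solution indeed puts the smaller amount in the first period.

```latex
% Leveling discounted marginal rates over two periods (hypothetical m(I) = a + bI)
\begin{aligned}
  a + b I_1 &= \frac{a + b I_2}{1+r}, \qquad I_1 + I_2 = I \\
  \Rightarrow\quad I_1 &= \frac{bI - ra}{b\,(2+r)} \;<\; \frac{I}{2} \quad\text{for } r > 0 .
\end{aligned}
```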
Taxation also affects the profitability of a particular method or structure of financing the company. Due to the fact that particular forms of financing are treated differently as far as taxes are concerned, we should take into account the tax effects of the financial decisions we take. From the point of view of managerial decisions, the income tax burden should reflect the following:
• The method of taxing the remuneration of a partner in a capital company (from the tax point of view it is more beneficial to pay interest on a loan than a dividend). In case of a partner which is itself a capital company, taxation is neutral for the financing decision, assuming that there are no limits due to ‘thin capitalization’.
• The method of taxing the remuneration of a partner in a partnership. From the tax perspective it is more beneficial to pay remuneration in the form of shares in profit instead of interest on a loan. Financing from borrowed capital coming from a partner is disadvantageous compared with financing from own capital, as there is no legal possibility of deducting interest when establishing the income of a partner-lender (regardless of whether the partner is an individual or a legal entity).
• Income taxes affect company financial liquidity, which is evidenced in the comparison of the possibility of preserving the continuity of financial liquidity by delaying tax payments in time, using the principles of straight-line and digressive depreciation.
• The essential elements of the policy of showing incomes are: the tax rate effect, the tax interest effect, the non-tax interest effect and the progression effect.
• Depending on the course of the tax scale, it is desirable to implement two different strategies within the policy of showing incomes. When using the means of interpretation of the actual state, the goal may be to minimize the discounted value of tax payments, while when using the means of shaping the actual state, the goal is the maximization of NPV after taxation.
• Analyzing progressive tax rates (continuous progression), it is important to seek the equality of discounted marginal rates in all analyzed periods. With reference to proportional rates and graded progression, it is vital to compare discounted marginal rates in particular periods and to move incomes to the periods (or time ranges) with the lowest discounted marginal rates.
• Obviously, with graded progression (contrary to continuous progression), there might be no optimal discounted marginal rate, and the optimization criterion in the form of leveling discounted marginal tax rates may not be applicable.
• Taking managerial decisions we should be aware that in income tax, moving incomes forward to future years is not always optimal, due to both the progression effect in progressive scales and the non-tax interest effect in proportional scales.
Non-resident taxpayer and the policy of showing incomes
Taking managerial decisions, it is important to assess the applicability of the presented methods to the analysis of the policy of showing incomes in EU countries by tax non-residents. If an individual has an unlimited tax obligation in country A and additionally obtains income in country B (the source country) as well as in country A (the country of residence), the incomes obtained in B (in accordance with the agreement on the avoidance of double taxation) are excluded from taxation in A, with the effect of tax progression preserved. The foundations of the analysis cover a period of two tax years (Y1 and Y2). Incomes obtained in country B (A1 × I + A2 × I) are taxed with income tax, applying the exclusion method in country A, while incomes obtained in country A are taxed with income tax in accordance with the rules applied in this country. The taxpayer should try to minimize the discounted value of tax payments over the two tax years by an optimal breakdown of the income (I) between its sources located in the two countries (A and B) and between the two periods:
I = I(Y1) + I(Y2) = (A1 + A2 + B1 + B2) × I
Optimization criterion (1): discounted value of tax payments = ∑ (PITB + PITA) × 1/(1 + r) = min.
Assumptions: (1) (A1 + A2 + B1 + B2) = 1; (2) A1, A2, B1, B2 ≥ 0; (3) constant tax rates and interest rate in the two analyzed years; (4) comparable principles of determining income tax in countries A and B (see: fig. 3, 4, 5); (5) complete divisibility of the income (I) between the settlement periods and both countries; and (6) no other additions to income taxes in either country (such as crisis, solidarity or church surcharges, etc.).
1. Assuming one settlement period and no progression (no progression effect) with reference to the exclusion method in A, the total income (I) should be divided into income obtained in country A and in country B in the way that minimizes the amount of the tax obligation. Thus the optimization criterion can be written down as:
(2): PIT = PITB [A1 I] + PITA [B1 I] = min., assuming that (A1 + B1) = 1, so:
(2): PIT = PITB [A1 I] + PITA [(1 – A1) × I] = min.
Thus the share of income from sources located in B should be increased (decreased) as long as the marginal tax rate for the income obtained in B is lower (higher) than the marginal tax rate applied to the income obtained in A. The marginal unit of income should be shown in the country with the lower border tax rate, no matter whether the scale is proportional, continuously progressive or graded progressive. Making managerial decisions, we should show incomes in country A until the marginal tax rate in A levels with the tax rate in B, and the remaining part of the income should be taxed in B.
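A brute-force sketch of criterion (2) (ours, not from the paper; the bracket schedules for countries A and B are hypothetical, loosely modeled on the figures below): split the total income I between the two countries and pick the share with the lowest joint tax.

```python
def tax(income, brackets):
    """Tax under a graded-progressive scale given as (upper threshold, marginal rate)."""
    due, last = 0.0, 0.0
    for threshold, rate in brackets:
        if income > last:
            due += (min(income, threshold) - last) * rate  # tax this bracket's slice
        last = threshold
    return due

# Hypothetical schedules for residence country A and source country B.
A = [(10_000, 0.00), (40_000, 0.15), (80_000, 0.30), (float("inf"), 0.45)]
B = [(20_000, 0.10), (60_000, 0.20), (float("inf"), 0.40)]

I = 100_000
best = min(range(101),
           key=lambda p: tax(I * p / 100, B) + tax(I * (100 - p) / 100, A))
print(f"optimal share of income shown in B: {best}%")
```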
2. Now assume that the exclusion method works with the progression effect. The progression effect in income taxation accounts for the fact that the tax rate related to taxable incomes in the taxpayer's residence state is determined with reference to the joint income of the taxpayer, including the exempted foreign incomes. The average tax rate in residence state A is fixed for 0 < B1 I < I, as it is calculated for the given total income I.
Fig. 3. Example functions of the marginal tax rate in countries A and B. The chart plots the marginal rate (in %) against the tax base (0-100,000 euro) for both countries, with rates rising to 45%.
Source: own elaboration.
Taking progression into account, the optimization criterion can be determined as:
(3): PIT = PITB [A1 I] + (PITA [I] / I) × B1 I = min, assuming that: (A1 + B1) = 1.
From the optimization criterion formulated in this way we can draw the conclusion that the share of income in country B should be increased (decreased) as long as the marginal tax rate for the income generated in B is lower (higher) than the average tax rate in A, calculated for the total income. Analyzing the optimal situation, we do not compare marginal tax rates, but the marginal rate for country B and the average rate for country A. The optimal way of showing incomes is shown by the mixed function of the average and marginal tax rates in residence state A.
Fig. 4. Mixed function of marginal and average tax rates in countries A and B. The chart plots the marginal rates in A and B and the resulting mixed marginal and average tax rate (in %) against the tax base (0-100,000 euro).
Source: own elaboration.
Analyzing the above figure we can notice two areas. In the first area the tax burden on income (I) is lower when the income is shown in country A than when it is shown in B, or in both countries. In the second area it is optimal to obtain the income from sources located in B. So, when analyzing one settlement period and taking into account the progression effect, we can state that as long as the average tax rate for the total income in the residence state is lower than the marginal tax rate in B, income should be shown in country A. In other situations the whole income should be shown in B. From the mixed function of average and marginal tax rates we can derive the mixed function of marginal tax rates.
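Criterion (3) changes only the term for country A: income shown in A is taxed at A's average rate computed on the total income. A sketch (ours), reusing tax() and the hypothetical schedules A and B from the previous listing:

```python
def total_tax_with_progression(share_b, I):
    """Criterion (3): B taxes its share at its own scale; A applies to the rest
    the average rate computed on the total income I (exclusion with progression)."""
    avg_rate_a = tax(I, A) / I
    return tax(share_b * I, B) + avg_rate_a * (1 - share_b) * I

I = 100_000
best = min((p / 100 for p in range(101)),
           key=lambda s: total_tax_with_progression(s, I))
print(f"optimal share of income shown in B: {best:.0%}")
```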
Fig. 5. Mixed function of marginal tax rates in countries A and B. The chart plots the marginal rates in A and B, the average rate in A and the resulting mixed marginal tax rate (in %) against the tax base (0-100,000 euro).
Source: own elaboration.
Although the figure shows that the marginal tax rate in A in the range 10,000-26,000 euro is higher than the marginal tax rate in B, the incomes from this hypothetical range should be shown only in country A, due to the fact that the benefits resulting from the lower marginal tax rate (in B) are offset by the progression effect, that is, the application of the average tax rate determined for the total income to the previous incomes from sources in country A. In the process of making managerial decisions, the mixed function of marginal tax rates allows us to optimize income taxation with regard to two countries and two planning periods.
In the analysis of two periods (a two-period analysis of a double optimization problem), at the first stage of planning we assume that there are no limits to the division of the total income (I) between source country B and residence country A, or between tax years Y1 and Y2. Two effects result from such a model, namely:
(1) there is a possibility of neutralizing the progression effect by an appropriate division of the total income. Such a division takes place in a situation when, in a given settlement period, home and foreign incomes do not appear simultaneously;
(2) it is necessary to take the interest effect into account in the analysis, which requires an extension of the optimization criterion:
(4): discounted value of tax payments = ∑ { PITB [Ax I] + PITA [(Ax + Bx) I] / [(Ax + Bx) I] × Bx I } × 1/(1 + r) = min, assuming that: (A1 + A2 + B1 + B2) = 1.
The (double) optimization problem consists in the fact that, when making managerial decisions, we should determine the optimal division of the incomes obtained in country B, understood as shifting incomes forward in time. At the same time it is essential to make a time-optimal division of the incomes in residence state A by leveling the discounted marginal rates. We must also take into account the interrelations resulting from the retained progression effect. When taking decisions we can indicate the general concept of solving the problem of which country to choose to show income in (the territorial problem) and of the analyzed periods, using one of two approaches:
1. Firstly, for each settlement period we should determine the mixed function of marginal tax rates, presenting the optimal way of showing income in this period.
2. Secondly, we should level the discounted marginal tax rates of both functions.
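The two-period double problem can likewise be solved numerically. A grid-search sketch (ours), again reusing tax() and the hypothetical schedules A and B from the earlier listings: the shares (A1, A2, B1, B2) of the total income sum to 1, each year's income shown in A is taxed at A's average rate on that year's total, and the second year is discounted.

```python
from itertools import product

def discounted_tax(a1, a2, b1, b2, I=200_000, r=0.05):
    """Criterion (4): discounted two-year tax for shares (a1, a2) shown in B
    and (b1, b2) shown in A, with exclusion-with-progression in A."""
    pv = 0.0
    for year, (a, b) in enumerate([(a1, b1), (a2, b2)]):
        total = (a + b) * I
        avg_a = tax(total, A) / total if total else 0.0
        pv += (tax(a * I, B) + avg_a * b * I) / (1 + r) ** year
    return pv

grid = [g / 20 for g in range(21)]  # 5% steps
best = min((c for c in product(grid, repeat=4) if abs(sum(c) - 1) < 1e-9),
           key=lambda c: discounted_tax(*c))
print("optimal shares (A1, A2, B1, B2):", best)
```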
Assuming limitations to income division, the problem of optimizing managerial
decisions could be solved slightly differently.
Figures 6 and 7 below show a graphic solution to the optimization problem. The function presented with a broken line is the mixed function of marginal tax rates in tax year 1 (first period Y1). The function presented in black is the discounted mixed function of marginal tax rates in tax year 2 (second period Y2). For the income I = 200,000 euro there are two local extreme values: EX1 and EX2. In EX1 the total income was shifted to country B in order to show it in the second period (tax year).
Fig. 6. Hypothetical example of the analysis for a total income of 200,000 euro. The chart plots the mixed marginal tax rate (year 1, broken line) and the discounted mixed marginal tax rate (year 2) in % against the tax base (0-200,000 euro), with the 18% level marked.
Source: own elaboration.
The discounted marginal tax rate is 0.18 × 1/(1 + r). In the first tax year incomes will be shown in A up to the amount at which the marginal tax rate on the final unit of income is also 0.18 × 1/(1 + r). Regarding EX2, the leveled discounted marginal tax rates amount to 0.18, as the total income was moved to country B to be shown for taxation in the first period (tax year), with the remaining part shown in country A in the second tax period up to the point where the final unit of income reaches a marginal tax rate of 18%. Of course, EX2 must be less beneficial than EX1.
The above example was based on a specific case, assuming an income of I = 200,000 euro. A question arises whether the analysis could be generalized (made more abstract). If the income (I) is higher, the graph stretches ‘horizontally’, but EX1 and EX2 do not change. If the income is lower, the graph shrinks, showing another – a third – extreme value (EX3), which is shown in figure 7.
Fig. 7. Hypothetical example of the analysis for a total income of 100,000 euro. The chart plots the mixed marginal tax rate and the discounted mixed marginal tax rate in % against the tax base, as in fig. 6.
Source: own elaboration.
The third extreme value EX3 for the total income denotes that income is shown only in country A. Analyzing the mixed function of marginal tax rates for the first tax year, we should notice that showing income in country A will be more beneficial – even though the marginal tax rate is higher than in B – due to the lower average tax rate, being the effect of progression. EX3 plays a vital role at a relatively low income I < 58,600 euro × (1 + 1/(1 + r)). With a high (very high) income, the optimum described by EX1 dominates. In the situation of a taxpayer with a limited tax obligation in B, assuming two periods (two years) of the analysis and two countries (A and B), we can formulate the following conclusions:
1. Optimum EX2 must be less beneficial than optimum EX1.
2. In case of a high total income, the most effective optimum (from the tax point of view) is EX1.
3. With a relatively low income, optimum EX3 is the best.
Making the original assumption that income can be freely divided between the two tax years (planning periods) and the two countries more realistic, we must adopt a time limit, thus adjusting the optimization criterion to reality:
(4): discounted value of tax payments = ∑ { PITB [Ax I] + PITA [(Ax + Bx) I] / [(Ax + Bx) I] × Bx I } × 1/(1 + r) = min, assuming that: (A1 + B1) > 0 and (A2 + B2) > 0.
If in each tax year the taxpayer is obliged to show some minimum income, then the limitations should be presented as so-called forbidden areas, assuming: (A1 + B1) > 0.4 and (A2 + B2) > 0.8. Such an assumption means that the established optima (extreme points EX1, EX2 and EX3) may be found in a forbidden area, and then the best acceptable solution will be the boundary solution.
Fig. 8. Solution under a time limitation for showing income. The chart plots the mixed marginal tax rate and the discounted mixed marginal tax rate in % against the tax base (0-200,000 euro).
Source: own elaboration.
The example shows that the extreme points (optima) EX1 and EX2 lie in the forbidden areas, so the boundary values are the acceptable minima. Thus such a situation allows managerial decisions consisting in moving the total disposable income in the amount of 80,000 euro to the second tax year, to be shown in country B. For the right boundary value, the incomes should be moved in full to the first tax year, to be shown in country B. Moreover, the second boundary value, due to the interest effect, is always less beneficial than the first one. Therefore the optimal breakdown of the taxable income will consist in showing 40,000 euro in the first tax year in country A, and 160,000 euro in the second year in country B.
Using the method of inclusion (tax credit) with regard to a non-resident's income from sources located in country B, we should remember that in a situation when the average tax rate in B is lower than the rate in A, the tax calculated in B will be credited in full towards the payment of tax in A, which means that the taxpayer will bear the whole tax burden at the level of country A. If the average rate in B is higher than the rate in A, then only a limited inclusion of the tax paid in B will take place in A; thus in A there will be no tax obligation concerning the income from sources located in B. So, in a situation when the average tax rate in the taxpayer's residence state (A) is lower than the rate in the state of the income source (B), the whole income should be shown in the taxpayer's residence state. Otherwise, the choice of the state has no significance for the decision, as in both cases the income will be taxed at the higher rate of the residence state.
Income taxes also negatively affect entrepreneurs' own capital. The capital decrease results from the burden of capital taxes and also corporate income tax in the case of capital companies, while in the case of partnerships – personal income tax on payments to partners.
Organizational effects of taxation
Organizational effects of taxation can be analyzed in two aspects. Firstly, entrepreneurs must take organizational steps to ensure the timely payment of tax obligations. These refer both to the activities related to one's own tax obligations (bookkeeping, making tax declarations or returns, supplying tax information) and to the performance of the payer's functions related to transferring taxes collected at source. Secondly, we should take into consideration the fact that the business decisions taken by entrepreneurs cause definite tax effects. Therefore taxes must be taken into account in the management process, and appropriate organizational conditions should be created for this. The organizational problem can be solved in two ways:
• by establishing one's own tax department, or
• by using the services of an external tax advisor (tax outsourcing37).
The above solutions are non-exclusive, as they can be combined. Obviously, the choice is preceded by a cost-benefit analysis. Especially in small and medium-sized businesses it is not profitable to keep one's own bookkeeping and tax offices, as the costs of organization and maintenance exceed the fees paid to an external service provider38. In case of bookkeeping and tax outsourcing the main reasons are usually cost reduction and access to expertise. Reduction of costs not only means lower expenses (usually it costs less to hire an accounting agency than to employ a full-time specialist), but also a reduction of the costs of applying tax law. The entrepreneur does not feel uncertain and is released from the unpleasant duty of checking and interpreting the law on his own39. The tax risk taken by the company also decreases. Tax risk can generally be understood as the risk of a possible dispute with the tax authorities. Depending on the attitude of a given enterprise, the risk can be pure or speculative40. Pure risk brings only the possibility of incurring a loss, while speculative risk also offers the possibility of gaining some benefits41. What is more, speculative risk is usually an outcome of a conscious decision – it is taken in order to gain something; the bigger the risk, the greater the potential benefits42. Thus intentionally violating
37 The essence of outsourcing is to commission some tasks and functions of an enterprise to an external entity, for example bookkeeping or tax or payroll services. The main reasons for this are: cost reduction, improving one's competitive position, work specialization, concentration on core functions and access to expertise.
38 Compare: Kanigowska J., Wołowiec T., Koszty stosowania prawa podatkowego w Polsce (wyniki badania), „Rachunkowość”, 6/2007, pp. 51-55.
39 Compare: Wołowiec T., Koszty stosowania prawa podatkowego w Polsce (wyniki badania), „Rachunkowość”, Warszawa, No 6/2007; Tran-Nam B., Evans C., Walpole M., Ritchie K., Tax Compliance Costs: Research Methodology and Empirical Evidence from Australia, „National Tax Journal”, No 2/2000; Evans C., Ritchie K., Tran-Nam B., Walpole M., A Report into Taxpayer Costs of Compliance, Australian Government Publishing Service, Canberra 1997; Karbarczyk S., Rynek usług konsultingowych, ISI BOSS Minirap. Sekt. from 25 January 1999 (http://site.securities.com/doc_pdf?pc=PL&doc_id=4944093); Moody J., The Cost of Complying with the Federal Income Tax, Special Report, 2002, No 114 (http://www.taxfoundation.org/publications/show/133.html).
40 The difference between risk and uncertainty is that the former is measurable. Measuring risk is based on the calculus of probability and the variation of possible outcomes: profits and/or losses. In case of speculative risk, we can measure it using data on the traceability of tax offences, taking into account only those that were committed deliberately.
41 Williams C., Smith M., Young P., Risk Management and Insurance, Irwin McGraw-Hill, 1998, p. 7.
42 Compare: Ubezpieczenia gospodarcze. Ryzyko i metodologia oceny, collective work, (ed.) T. Michalski, C.H. Beck, Warszawa 2004, p. 92.
or dodging the law by the company means taking speculative risk. Pure risk, on the other hand, refers to entering a conflict with the tax authorities when:
• the activity of a company was unlawful, but this unlawfulness was not intentional (a mistake, ignorance, etc.),
• the activity of a company was lawful (usually this is determined by the court or possibly a higher-instance tax authority), but it was not considered as such by the tax authorities,
• the activity of a company was lawful and was considered as such for some time by the tax authorities, but they changed their opinion and a conflict arose.
Both these risks describe a potential reality, that is, the possibility of entering a conflict with the tax authorities. Their realization is random, and this is a case of so-called double randomness – we do not know the time of the event (conflict) or its depth, that is, its effects. These effects are mainly financial (arrears, financial penalties, etc.), though the company may also lose its credibility. What is important, these two types of risk are related to uncertainty, each to a different kind. Speculative risk is associated with the uncertainty whether unlawful activity will be revealed, while pure risk – with the uncertainty which is an inherent part of the tax system.
Bibliography
[1] Agell J., Persson M., On the Analytics of the Dynamic Laffer Curve, Department of Economics, Uppsala University, The Institute for International Economic Studies, Stockholm University, Uppsala-Stockholm 2000.
[2] Cullen J.B., Gordon R.H., Taxes and Entrepreneurial Activity: Theory and Evidence for the U.S., National Bureau of Economic Research, Working Paper 9015, Cambridge, June 2002.
[3] Domar E.D., Musgrave R.A., Proportional Income Taxation and Risk Taking,
“Quarterly Journal of Economics”, 58, 1944.
[4] Evans C., Ritchie K., Tran-Nam B., Walpole M., A Report into Taxpayer Costs of
Compliance. Australian Government Publishing Service, Canberra 1997.
[5] Gentry W.M., Hubbard R.G., The Effects of Progressive Income Taxation on Job Turnover, National Bureau of Economic Research, Working Paper 9226, September 2002.
[6] Gentry W.M., Hubbard R.G., Tax Policy and Entry into Entrepreneurship. Mimeograph, Columbia University, July 2002.
[7] Goolsbee A., What Happens When You Tax the Rich? Evidence from Executive Compensation, National Bureau of Economic Research, Working Paper 6333, Cambridge, December 1997.
[8] Gruber J., Saez E., The Elasticity of Taxable Income: Evidence and Implications, National Bureau of Economic Research, Working Paper No 7512, Cambridge,
January 2000.
[9] Hundsdoerfer J., Jamroży M., Wpływ podatków na decyzje inwestycyjne
przedsiębiorstwa, „Przegląd Podatkowy”, No 11/1999.
[10] Skonieczny J., Działania adaptacyjne przedsiębiorstwa, „Przegląd Organizacji”, No 6/2001.
[11] Kudert S., Jamroży M., Optymalizacja opodatkowania dochodów przedsiębiorców, ABC Wolters Kluwer business, Warszawa 2007; Sokołowski J., Zarządzanie przez podatki, PWN, Warszawa 1995.
[12] Leibfritz W., Thornton J., Bibbee A., Taxation and Economic Performance, OECD Working Paper No. 176, Paris 1997.
[13] Mendoza E., Milesi-Ferretti G.M., Asea P., On the ineffectiveness of tax policy in altering long-run growth: Harberger's superneutrality conjecture, “Journal of Public Economics”, 63/1997.
[14] Moody J., The Cost of Complying with the Federal Income Tax. Special Report, 2002,
No 114. (http://www.taxfoundation.org/publications/show/133.html)
[15] Myles G.D., Taxation and Economic Growth. Institute for Fiscal Studies and University of Exeter, September 1999.
[16] Penc J., Leksykon biznesu, Placet, Warszawa 2002.
[17] Tran-Nam B., Evans C., Walpole M., Ritchie K., Tax Compliance Costs: Research
Methodology and Empirical Evidence from Australia. „National Tax Journal”, No
2/2000.
[18] Williams C., Smith M., Young P., Risk Management and Insurance, Irwin McGraw-Hill, 1998; Ubezpieczenia gospodarcze. Ryzyko i metodologia oceny, collective work, (ed.) T. Michalski, C.H. Beck, Warszawa 2004.
[19] Wołowiec T., Koszty stosowania prawa podatkowego w Polsce (wyniki badania), „Rachunkowość", No 6/2007.
Summary
Key words: market, income tax, managerial decisions
Taking rational decisions in a company, both current and strategic ones, requires knowing and taking into consideration the external conditions of the conducted activity. The accuracy of the decisions made, as well as the ability to adjust to a changing external environment, determines not only the effectiveness of the enterprise's operations, but also its ability to continue its activity. In a proper business environment, the significance of feedback consists in adjusting reactions to the information received on the effects of actions.
Rynkowe reakcje podmiotów gospodarczych
wobec podatku dochodowego i decyzji zarządczych
Streszczenie
Słowa kluczowe: rynek, podatek dochodowy, decyzje zarządcze
Podejmowanie racjonalnych decyzji w firmie, zarówno bieżących, jak i strategicznych, wymaga znajomości i uwzględniania zewnętrznych warunków prowadzonej działalności. Trafność podejmowanych decyzji, jak również zdolność do dostosowania się do zmieniających się warunków zewnętrznych, określa nie tylko skuteczność działalności gospodarczej, ale także zdolność przedsiębiorstwa do prowadzenia dalszej działalności. W środowisku biznesowym znaczenie sprzężenia zwrotnego polega na dostosowywaniu reakcji do otrzymanych informacji o skutkach działań.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Piotr Skłodowski
Maria Skłodowska-Curie Warsaw Academy
Anna Bielska
Warsaw University of Technology, Faculty of Geodesy and Cartography,
Department of Spatial Planning and Environmental Sciences
The role of soils
in sustainable development of rural areas
Introduction
The term “sustainable development” in its contemporary meaning was first used
in the Club of Rome’s 1972 report “The Limits to Growth” and has been a key
issue of many UN conferences. The basic aim of sustainable development is the
protection of finite natural resources or resources that are difficult to renew. Hence their exploitation should take into account the needs of subsequent generations. Common sense should be applied, and solutions should be sought that make it possible to substitute resources that are finite or difficult to renew with other means. This pertains, in particular, to resources from which energy is generated.
Sustainable development of rural areas is a vital issue in Poland. Such areas cover
approximately 93% of the country. Due to their special agronomic, natural and
landscape values they constitute multifunctional space. The basis of sustainable
development of rural areas is the close link between the directions of economic
exploitation of natural resources, especially soils, and spatial and environmental
policy. It is mainly aimed at improving spatial structure of farms, as well as structure
and quality of agricultural production [Dębicki R., Skłodowski P., 1990] [Krasowicz
S. et al., 2011]. Adjustment of the final use of agricultural lands to natural conditions, so that it facilitates economic growth of a given area but, simultaneously, does
not adversely affect the environment, is a vital issue [Blum, 2005]. This could be achieved by a prudent local spatial planning process and, in more detail, by rural management works, mainly land consolidation and exchange proceedings.
The concept of sustainable development pertains also to agriculture, which is
one of the basic economic sectors in rural areas. UN Conference on Environment
and Development held in Rio de Janeiro in 1992 was a milestone in development of
this idea. Due to its multidimensional character the concept and the development of
sustainable agriculture depend on local conditions and are determined by the following factors [Raman S., 2006]:

environmental (climate, soil quality, relief),

socioeconomic,

cultural (culture, tradition, ethical standards).
Generally, sustainable agriculture is a system of production that harmoniously
takes advantage of technical and biological progress in tillage, fertilisation and crop
protection [Winawer Z., 2013]. Industrial means of production are used moderately
in sustainable agriculture. The purpose is to obtain stable and appropriate profits
from production in a way that does not threaten the natural environment
[Runowski H., 2000]. According to Feber et al. (2010), sustainable agriculture pursues environmental, ecological and social aims simultaneously and harmoniously. In line with the European Union approach, sustainable agriculture brings together production goals and the requirements of natural environment protection, and constitutes the preferred cultivation system.
Any decision regarding a change of a given area's designation or directions of development should be preceded by an inventory of natural resources, including, without limitation, valorisation of soils and analysis of the land use structure, which allows for improvement of the economic effects that result from transformation of the relevant areas. Therefore, economic conditions should also be taken into consideration.
The aim of the research was the analysis of soil conditions for the purposes of
drafting a plan of the functional and spatial structure that is developed during the
land consolidation proceedings. It was assumed that soil conditions determine, to a considerable degree, the manner of use of agricultural lands. They should also be
carefully considered in field-forest boundary determination, as well as for the purposes of designating lands for development. This will have a significant positive
impact on sustainable and multifunctional development of rural areas.
Survey area and research method
The research was performed in several Mazovian communes, i.e. Pomiechówek, Stromiec and Somianka. Detailed plans were drafted for the following geodetic units: Błędowo (411 ha, Pomiechówek commune), Płudy Stare (294 ha, Somianka commune) and Biała Góra (351 ha, Stromiec commune).
The following documents were examined:
- land and building registration database,
- soil agricultural maps at a scale of 1:5,000, together with annexes thereto,
- soil classification maps at a scale of 1:5,000,
- orthophotomaps,
- communal studies of conditions and directions of land use,
- local development plans.
On the basis of cartographic and descriptive materials, spatial analysis was performed with the use of Quantum GIS software. This analysis allowed the best potential functions for the investigated areas to be designated. Detailed knowledge of soil conditions, accompanied by careful spatial analysis, made it possible to eliminate spatial conflicts from the drafts and to reduce the negative impact of land use on the natural environment and socio-economic conditions.
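The workflow described above can be illustrated in code. The sketch below is a hypothetical reproduction of such an overlay analysis using the geopandas library rather than the Quantum GIS desktop tools the authors used; the file names and attribute columns ("parcel_id", "complex") are invented for the example.

import geopandas as gpd

# Soil agricultural map (1:5,000) and cadastral parcels as vector layers.
soils = gpd.read_file("soil_agricultural_map.shp")
parcels = gpd.read_file("land_register_parcels.shp")

# Intersect parcels with soil polygons so that each fragment carries both
# the parcel identifier and the soil suitability attributes.
overlay = gpd.overlay(parcels, soils, how="intersection")

# Determine the dominant soil suitability complex per parcel, weighted by area.
overlay["area"] = overlay.geometry.area
dominant = (overlay.groupby(["parcel_id", "complex"])["area"].sum()
            .reset_index()
            .sort_values("area", ascending=False)
            .drop_duplicates("parcel_id"))
print(dominant.head())

A table like this, joined back to the parcel layer, is the kind of input from which functional designations can then be assigned.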
Results and discussion
Knowledge of soil conditions in a given area significantly influences the development of the optimal functional and spatial structure thereof. In order to achieve
the optimal use of lands, meticulous analysis of soils, land use, as well as of social
and economic conditions has to be performed. Having considered valuation of
natural resources and of agricultural suitability, as well as the physico-chemical
properties of soils, the investigated area was spatially divided from the functional
perspective into separate areas designated for (fig. 1):
- sustainable agricultural production,
- organic food production,
- production of energy crops,
- transformation of arable lands into forests or permanent pasture,
- development.
Fig. 1. Functional and spatial structure of the Płudy Stare geodetic unit. Source: own data, prepared on the basis of information from the land register and the soil agricultural map.
The best soils in a given area, taking into account quality class and agricultural
suitability, should be designated for the purposes of sustainable agricultural production. In some regions of Poland (e.g. lubelskie voivodship) these will be soils
included in I, II, IIIa and IIIb valuation classes of wheat agricultural suitability
complexes (1, 2, 3) and of very good rye complex (4). In other regions (e.g.
mazowieckie voivodship) these can also be poorer soils, included in classes IVa and
IVb, or even V, agricultural suitability of which is still good and which belong to
rich and poor rye complexes (5 and 6) and to strong and weak cereal-fodder complexes (8 and 9). These are soils of lower quality and, therefore, the crops will be
poorer and fertilisation will be necessary. Nevertheless, in planning activities with
respect to areas, where poor and very poor soils prevail and the agricultural function
is still important and constitutes the main source of income for the residents, even
the poorer soils have to be protected against exclusion from agricultural production.
Poor quality soils, included in poor rye (6) and poor cereal-fodder complexes, as
well as some soils of good rye complex (5) (fig. 1) should be designated for organic
food production and development of agro-tourism. It should be emphasized, however, that only the farms, in which grasslands and permanent pasture cover at least
30% of the farm area, have a real opportunity to develop ecological agriculture
[Prokopowicz, Okruszko, 1997]. Such farms may completely exclude chemical
fertilisers in order to use organic matter and still obtain satisfactory crops. Due to
higher costs of ecological production, its development should be accompanied by
promotion of agro-tourism and recreational function. Furthermore, lands designated for organic food production cannot be located in immediate proximity to areas
where chemical fertilisers and pesticides are used. Designation of recreational areas
requires additional analysis of land use and relief. Soil type and particle size are also
important. Suitability of land cover for recreational purposes is assessed taking into
account the following soil features: organic matter content (organic or mineral
soils), texture, type and kind of soil. Light soils are the most attractive, while organic
soils, particularly hydrogenic ones, are of little use for tourism and recreation
[Hopfer, Cymerman, Nowak, 1982].
Designation of lands for production of energy crops (biofuels) may soon become
an important issue. Weak agricultural soils should be devoted to such crops. Generally, this pertains to class V and VI soils included in complexes 6 and 9, bearing in
mind that soils of the 6th complex (poor rye) are periodically or permanently too
dry, while soils of the 9th complex (poor cereal-fodder) too moist, so plant species
have to be selected carefully. Chemically contaminated soils should also be designated for biofuel crops. In the investigated area the lands located in immediate
vicinity of a national road were designated for such production (fig. 1).
The research proves that soils of the 7th complex (very poor rye) (mainly Arenosol) are characterised by low productivity. They are formed from sands under shallow layers of gravel and loose sand. They have poor sorption properties, which
hinders sorption of nutrients and results in low efficiency of fertilisation. They are
acidic or very acidic, permanently too dry. Therefore, taking into account economic
and ecological conditions, the weakest soils, included in complex 7 – very poor rye,
should be successively forested. Exceptionally, enclaves of the 6th complex – poor
rye, can also be forested for the purposes of rational determination of the field-forest boundary. Nevertheless, the social factor has to be considered as well. It is
difficult to imagine forestation of all farms in a village or a bigger area, where people
live and work and agriculture constitutes their main source of income. In such cases, alternative solutions have to be sought, e.g. introduction of ecological agriculture
[Skłodowski et al., 2005].
Transformation of arable lands into permanent pasture should pertain to non-drained soils of complexes 8 (strong cereal-fodder), 9 (poor cereal-fodder) and 14.
Transformation of arable lands of excessive moisture content into grasslands or
pasture should take into account the content of permanent pasture in the total area
of agricultural lands, as well as contingent transformation for the purposes of ecological agriculture.
Soils of weak classes (V and VI) included in dry and too dry complexes (6 and 7)
should be designated for development or afforestation. This will allow to avoid the
costs connected with exclusion of better quality soils from agricultural production,
which result from the Act of 3 February 1995 on protection of arable and forestry lands [consolidated text: Official Journal No. 2004.121.1266, as amended], as well as
the costs of drainage in the case of soils of excessive moisture content.
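The designation logic discussed in this section can be summarised schematically in code. The sketch below encodes the rules described above as a simple Python function; it is an illustration of the paper's reasoning only, not a planning tool, and the precedence of overlapping rules (e.g. complex 6 on class V-VI soils) is a simplifying assumption.

def designate(complex_no: int, valuation_class: str, drained: bool = True, contaminated: bool = False) -> str:
    """Map a soil suitability complex and valuation class to a functional designation."""
    if contaminated:
        return "production of energy crops (biofuels)"
    if complex_no in (1, 2, 3, 4):
        return "sustainable agricultural production"
    if complex_no == 7:
        return "afforestation"  # very poor rye: low productivity, acidic, too dry
    if complex_no in (8, 9, 14) and not drained:
        return "transformation into permanent pasture"
    if complex_no in (6, 9) and valuation_class in ("V", "VI"):
        return "production of energy crops (biofuels)"
    if complex_no in (5, 6):
        return "organic food production and agro-tourism"
    return "further analysis required"

print(designate(3, "IIIa"))  # -> sustainable agricultural production
print(designate(7, "VI"))    # -> afforestation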
Conclusions
1. Information provided by soil agricultural maps at a scale of 1:5,000 constitutes the basis for spatial division of a commune or a geodetic unit into separate areas functionally designated for:
- sustainable agricultural production,
- organic food production,
- production of energy crops,
- transformation of arable lands into forests or permanent pasture,
- development.
2. Knowledge of soils – of their properties, agricultural suitability and spatial distribution in a given area – forms a sound basis for correct determination of the relations between agriculture and the natural environment.
3. Crop production, designated for various purposes, remains the basic function of soils in rural areas, but economic, ecological and environmental conditions should also be considered.
4. A soil database developed on the basis of the existing cartographic and descriptive materials that pertain to soils, complemented by additional laboratory analysis and long-term monitoring of soils, allows the changes in the soil and natural environment to be determined and facilitates exploitation thereof in line with the sustainable development principle.
References
[1] Blum W.E.H., Functions of Soil for Society and the Environment, Reviews in Environmental Science and Biotechnology, Vol. 4, No 3, pp. 75-79, 2005.
[2] Dębicki R., Skłodowski P., The role of soil in functioning of ecosystems, Roczniki Gleboznawcze, 1.2, 1.3, pp. 5-20, 1990.
[3] Feber A., Pudełko R., Filipiak K., Borzęcka-Walker M., Borek R., Jadczyszyn J., Kozyra J., Mizak K., Świtaj Ł., Ocena stopnia zrównoważenia rolnictwa w Polsce w różnych skalach przestrzennych, Studia i Raporty IUNG-PIB 20, pp. 9-27, 2010.
[4] Hopfer, Cymerman, Nowak, Ocena i waloryzacja gruntów wiejskich, Państwowe Wydaw. Rolnicze i Leśne, Warszawa 1982.
[5] Krasowicz S., Oleszek W., Horabik J., Dębicki R., Jankowiak J., Stuczyński T., Jadczyszyn J., Racjonalne gospodarowanie środowiskiem glebowym Polski, Polish Journal of Agronomy, 7, pp. 43-58, 2011.
[6] Raman S., Agricultural sustainability. Principles, processes and prospects, Haworth Press, Binghamton, NY (USA), 474 pp., 2006.
[7] Runowski H., Zrównoważony rozwój gospodarstw i przedsiębiorstw rolniczych, Rocz. Nauk. SERiA, t. II, z. 1, pp. 94-102, 2000.
[8] Winawer Z., Produkty regionalne i tradycyjne we wspólnej polityce rolnej, Europejski Fundusz Rozwoju Wsi Polskiej, 2013, http://innowacyjnaradomka.pl/wp-content/uploads/2013/06/EFRWPII.PRODUKTY_REGIONALNE_I_TRADYCYJNE_we_WPR.pdf [access: 28.02.2014].
Summary
Key words: functional and spatial structure, spatial planning, soil conditions
The basis of sustainable development of rural areas is the close link between the directions of
economic exploitation of natural resources, especially soils, and spatial and environmental policy.
Adjustment of the final use of agricultural lands to natural conditions, so that it facilitates economic growth of a given area but, simultaneously, does not adversely affect the environment, is a vital
issue. The aim of the research was the analysis of soil conditions for the purposes of drafting a plan
of the functional and spatial structure that is developed during the land consolidation proceedings. It
was assumed that soil conditions determine, to a considerable degree, the manner of use of agricultural lands. They should also be carefully considered in field-forest boundary determination, as well
as for the purposes of designating lands for development. This will have a significant positive impact
on sustainable and multifunctional development of rural areas.
Rola gleb w zrównoważonym rozwoju obszarów wiejskich
Streszczenie
Słowa kluczowe: podział funkcjonalno-przestrzenny, planowanie przestrzenne, warunki
glebowe
Podstawą zrównoważonego rozwoju obszarów wiejskich jest ścisłe powiązanie kierunków gospodarczego wykorzystania walorów przyrodniczych, w tym szczególnie glebowych z polityką
środowiskową i przestrzenną. Dostosowanie docelowego sposobu użytkowania gruntów rolnych do
warunków naturalnych, tak aby użytkowanie ich nie oddziaływało negatywnie na środowisko,
i jednocześnie pozwalało na rozwój ekonomiczny danego obszaru, jest niezmiernie ważnym zagadnieniem. Celem badań była analiza warunków glebowych dla potrzeb opracowania projektu
podziału funkcjonalno-przestrzennego opracowywanego w procesie scalenia. Założono, że warunki
glebowe w znacznym stopniu determinują sposób użytkowania nie tylko gruntów rolnych, ale
również powinny być szczegółowo uwzględniane w kształtowaniu granicy rolno-leśnej czy wyznaczaniu gruntów pod zabudowę. Wpłynie to bardzo korzystnie na zrównoważony, wielofunkcyjny
rozwój obszarów wiejskich.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Rafał Grupa
Gdańsk Higher School of Humanities
Total Quality Management
as a philosophy of quality management
Total Quality Management (TQM) refers to management methods used in order to enhance quality and productivity in business organizations. TQM is a comprehensive approach to management which operates at the level of the whole organization, with the participation of all departments and staff, and which extends both forward and backward along the supply cycle so as to include both suppliers and customers. TQM is only one of many acronyms used to label management systems focused on quality:
- CQI – Continuous Quality Improvement.
- SQC – Statistical Quality Control.
- QFD – Quality Function Deployment.
- QIDW – Quality In Daily Work.
- TQC – Total Quality Control1.
1 Total Quality Management, http://www.inc.com/encyclopedia/total-quality-management, 18.11.2013.
Like many other such systems, TQM provides a framework for the implementation of effective quality and manufacturing solutions which may increase the profitability and competitiveness of organizations.
TQM has its origins in the statistical quality control chart invented by Walter A. Shewhart. Initially it is implemented in the Western Electric Company and further developed by Joseph Juran, who works there using this method. TQM is demonstrated on a large scale in Japanese industry through the intervention of W. Edwards Deming and, thanks to his missionary work in the United States and around the world, Deming comes to be seen as the "father" of quality control2.
Walter Shewhart, working at that time at Bell Telephone Laboratories, is the first to develop a statistical control chart, in 1923; the chart still bears his name. He publishes his method in 1931 as Economic Control of Quality of Manufactured Product. The method is introduced for the first time at Western Electric Company's Hawthorne plant in 1926. Joseph Juran is one of the people trained in this technique. In 1928 he writes a booklet entitled Statistical Methods Applied to Manufacturing Problems, which is later incorporated in the AT&T statistical manual of quality control. In 1951 Juran publishes his Quality Control Handbook, significant indeed for contemporary science.
In 1947 W. Edwards Deming, trained as a mathematician and statistician, goes to Japan at the request of the US Department of State to help Japan prepare the 1951 population census. The Japanese, already aware of Shewhart's methods of statistical quality control, invite Deming to lecture on the topic. A series of lectures is held in 1950 under the auspices of the Union of Japanese Scientists and Engineers (JUSE). Deming had helped develop critical production methods in the USA during World War II, in particular methods of quality control, in which the executive level and engineers controlled the entire process while production workers played a minor role. In his lectures on SQC, Deming promotes his ideas together with the technique, stressing the average employee's involvement in the quality process and the application of the new statistical tools. Japan is open to his ideas and commences to implement the process which becomes known as TQM. In 1954 the Japanese also invite Joseph Juran to give lectures; he is welcomed quite enthusiastically3.
2 Ibidem.
3 Ibidem.
The use of these methods in Japan brings significant and undeniable results, appearing as a dramatic increase in the quality of Japanese products and their success in exports. This leads to the spread of the quality movement throughout the whole world. During the 1970s and 1980s, American producers try to adopt quality and
performance techniques which could restore their competitiveness. Deming's approach to quality control becomes recognized in the United States, and Deming
himself becomes sought after as a lecturer and author. Total Quality Management, the expression applied to the quality initiatives created by Deming and other management gurus, becomes core to American companies at the end of the 1980s4.
4 Ibidem.
Different consultants and different schools emphasize different aspects of TQM as it develops over time. These aspects may be technical, operational, social or managerial.
Basic TQM principles, as the American Society for Quality Control puts it, are:
Prevention – Prevention is better than cure. In the long term, it is cheaper to stop producing defective products than to try to find them.
No defects – The ultimate goal is zero defects, or an extremely low level of defects if the product or service is complicated.
First things first – It is better not to produce at all than to produce something defective.
Quality affects everyone – Quality is not only the concern of the production or operations department; it applies to all departments, including marketing, finance and human resources.
Continuous improvement – Companies should always look for ways to improve the quality improvement process.
Employees' involvement – Every worker involved in production and operations plays an important role in planning possibilities for quality improvement and in identifying any problems associated with it.
Source: Own study based on Jain P.L., Quality Control and Total Quality Management, Tata McGraw-Hill Publishing Company Limited, New Delhi 2001.
The true source of quality, the invention on which everything else is based, is statistical quality control. In a nutshell, this method requires quality standards to be set first, by establishing measurements for a given product and defining what constitutes quality. The measurement may be size, chemical composition, reflectance, etc. – in fact, anything measurable. Tests are carried out in order to establish which variations of the basic measurement (up or down) are acceptable.
This record of acceptable results is then shown on one or several Shewhart charts. Quality control begins in the production process. Samples are collected and measured immediately and on a regular basis, with the results of these measurements recorded on a graph. If the measurements move beyond the acceptable range (the standard) or show an undesirable trend (up or down), the process is stopped and production halted until the reasons for such variances are found and corrected. Therefore SQC, in contrast to TQM, is based on continuous sampling, standard measurement and immediate remedial action whenever measurements differ from the acceptable standards5.
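The mechanics just described can be expressed in a few lines of code. The following is a minimal sketch of a Shewhart-style check, with control limits set at the mean plus or minus three standard deviations of a baseline run; the data and the three-sigma rule as stated here are illustrative assumptions rather than a prescription from the article.

from statistics import mean, stdev

# Baseline measurements taken while the process is known to be in control.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
centre = mean(baseline)
sigma = stdev(baseline)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma  # upper/lower control limits

# New samples are checked against the limits as production runs.
for i, sample in enumerate([10.0, 10.1, 9.9, 10.6, 10.0]):
    if not (lcl <= sample <= ucl):
        print(f"sample {i}: {sample} outside control limits -> halt production")
        break
    print(f"sample {i}: {sample} within limits")

In this run the fourth sample falls above the upper control limit, so the check stops the loop, mirroring the halt-and-investigate rule described above.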
TQM is SQC plus all the other elements. Deming sees these elements as basic and essential for the implementation of TQM. In his book Out of the Crisis (1982), he argues that companies, in order to improve the business environment as a whole, should place emphasis on the improvement of products and services rather than invest in short-term strategies aimed at financial benefits. He argues that if the management starts to follow this philosophy, various aspects of the organization's activities, from training to the relations between manager and employee, become much more appropriate and, ultimately, more effective both for employees and for the entire organization. Deming is disdainful of companies which take economic decisions on the basis of profits, emphasizing the quantitative factor over the qualitative one, and he believes that a well thought-out system of statistical process control can be an invaluable TQM tool6.
It is difficult to define the term Total Quality Management unambiguously. One can describe it as a popular quality management concept, yet it is much more than merely ensuring the quality of a product or service. TQM stands for a business philosophy – a way of working. It describes ways of managing people and business processes in order to ensure full customer satisfaction at each stage.
5 Ćwiklicki M., Obora H., Metody TQM w zarządzaniu firmą, Poltext, Warszawa 2009.
6 Total Quality Management, http://www.inc.com/encyclopedia/total-quality-management, 18.11.2013.
The main characteristics of TQM can be summed up in the phrase "Do the right things the first time"7.
7 White M.L., Doing the right things the first time (Mellon Bank Corp's Total Quality Improvement Process), Trust & Estates, September 1, 1993.
In summary, Total Quality Management is a management system for a customer-oriented organization that involves all employees in continuous improvement. It uses strategy, data and effective communication to integrate the quality discipline into the culture and activities of the organization.
TQM principles are best expressed graphically, as a cycle of the organization's operations comprising: development and involvement of employees; continuous learning and improvement; management by processes and facts; leadership; development of partnerships; social responsibility; customer focus; and focus on results.
Source: Own study based on Goetsch D.L., Davis S.B., Quality Management for Organizational Excellence. Introduction to Total Quality, Pearson Education International, Prentice Hall, USA 2010 and Total Quality Management, http://www.inc.com/encyclopedia/total-quality-management, 18.11.2013.
In general, TQM is made up of several basic elements, such as:
- Customer focus: the client ultimately determines the level of quality. No matter what the organisation does to support quality improvement – whether it trains its employees, includes quality in the design process, upgrades computers or software, or buys new measurement tools – it is the customer who determines whether these efforts are worth it.
- Joint involvement of employees: all employees participate in work with the aim of achieving common objectives. Mutual involvement of employees can only be achieved once fear and an unpleasant atmosphere at work are removed and the board introduces a proper work environment.
- Centralisation of processes: an essential part of TQM is the focus on process-oriented thinking. Any process is a series of steps, initiated by a supplier (internal or external), which are converted into results provided to customers (again, internal or external). The steps required for the implementation of any process are set out, and the measures are continuously monitored in order to detect unexpected changes.
- Integrated system: the organization may consist of many functional departments organized structurally, vertically or horizontally, but TQM is always at its heart. Everyone must understand the vision and mission of the company and its prevailing principles, such as the quality policy, objectives and critical processes of the organization. Economic performance must be monitored and reported on a continuous basis. The integrated system can be modelled on the evaluation criteria of the Baldrige National Quality Program and embodied in, for example, the ISO 9000 series of standards8. Every organization has a unique organisational culture, and it is virtually impossible to achieve excellence in its products and services unless that culture is properly developed. The integrated system therefore combines business improvement with exceeding the expectations of customers, employees and other persons concerned.
- Systematic and strategic approach: a key element in quality management is a strategic and systematic approach to achieving the vision, mission and objectives of the organization. This process, known as strategic planning or strategic management, includes the formulation of a strategic plan which treats quality as a key element.
- Continuous improvement: the main aim of TQM is continuous improvement of the process. Continuous improvement of the organization is based on finding both analytical and creative ways to become more competitive and more effective in meeting the expectations of the parties concerned. Large profits are made through small, sustainable improvements that take place over a long period of time. This requires a long-term perspective from managers and a willingness to invest now for the sake of future benefits.
- Fact-based decision-making: to know how well the organization performs, data on the performance of individual measurements are needed. TQM requires that the organization continually collects and analyzes data in order to improve the accuracy of decision-making, reach agreement and allow predictions based on past performance.
- Communication: in times of organizational change, as well as in daily work, effective communication plays a major role in maintaining morale and motivating employees at all levels. Communication includes strategies, methods and deadlines9.
These elements are considered to be very important for TQM and many organizations define them as a whole – a set of fundamental values and principles on
which the organization is to operate.
8 2011-2012 Criteria for Performance Excellence, The Malcolm Baldrige National Quality Award, National Institute of Standards and Technology, U.S. Department of Commerce.
9 Total Quality Management, http://www.inc.com/encyclopedia/total-quality-management, 18.11.2013.
TQM is not an easy concept to implement in any business, particularly in those which, traditionally, have not been interested in customers' needs and business processes. In fact, many attempts to introduce TQM fail. One of the reasons is that TQM has serious consequences for the whole organisation. For example, it requires managers to listen to their employees about the production processes in which they participate. In a continuous improvement culture, the views of staff are invaluable. The problem is that many businesses create barriers to such involvement, for example when managers feel that their authority is being challenged.
In the contemporary context, TQM requires participatory management, continuous improvement of processes and the involvement of all of the organization's departments. Participatory management refers to the involvement of all members in the management process. In other words, managers should set company policies and make key decisions in a way that takes into account the views of their subordinates. This is an important incentive for workers, who begin to feel that they have control over, and responsibility for, the processes in which they participate. As can be seen from the above considerations, TQM, though stressing "quality" in its name, is indeed a management philosophy.
To get the best results, TQM requires long-term cooperation and a holistic approach to business.
Bibliography
[1] Deming W.E., Out of the Crisis, MIT Center for Advanced Engineering Study, 1986 (http://books.google.co.uk/books?id=LA15eDlOPgoC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false).
[2] Ćwiklicki M., Obora H., Metody TQM w zarządzaniu firmą, Poltext, Warszawa 2009.
[3] Goetsch D.L., Davis S.B., Quality Management for Organizational Excellence. Introduction to Total Quality, Pearson Education International, Prentice Hall, USA 2010.
[4] Jain P.L., Quality Control and Total Quality Management, Tata McGraw-Hill Publishing Company Limited, New Delhi 2001.
[5] Montgomery D.C., Statistical Quality Control, Fifth Edition, John Wiley & Sons Inc., USA 2005.
[6] White M.L., Doing the right things the first time (Mellon Bank Corp's Total Quality Improvement Process), Trust & Estates, September 1, 1993.
[7] 2011-2012 Criteria for Performance Excellence, The Malcolm Baldrige National Quality Award, National Institute of Standards and Technology, U.S. Department of Commerce.
[8] Liker J.K., The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer, McGraw-Hill, USA 2004.
[9] http://deming.org
[10] http://www.inc.com/encyclopedia/total-quality-management.
Summary
Key words: TQM, management, quality, efficiency
This article presents Total Quality Management (TQM) as one of the basic tools for increasing the efficiency of business performance. It presents the history of the creation of TQM and its development, and characterizes the components that affect its functioning and its impact on the company's structure. Nowadays, in order to win a market and become its leader, one has to pay special attention to the quality of products and services, not just their quantity, as only in this way can competitiveness in a globalized world be increased.
Total Quality Management jako filozofia zarządzania jakością
Streszczenie
Słowa kluczowe: TQM, zarządzanie, jakość, efektywność
Niniejszy artykuł przedstawia Total Quality Management jako jedno z podstawowych narzędzi mających na celu zwiększenie efektywności przedsiębiorstwa. Zaprezentowano historię powstania TQM oraz jego rozwój, a także scharakteryzowano części składowe mające wpływ na jego funkcjonowanie oraz oddziaływanie na struktury firmy. Pokazano, iż w obecnych czasach, aby zdobyć rynek oraz stać się jego liderem, należy zwracać szczególną uwagę na jakość produktów oraz usług, a nie tylko na ilość; jedynie w taki sposób można zwiększyć swoją konkurencyjność w zglobalizowanym świecie.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Maciej Kiedrowicz
Military University of Technology,
Maria Skłodowska-Curie Warsaw Academy
The importance of an integration platform
within the organisation
Introduction
Not every organisation develops in the way its owners would like it to. In many organisations there are frequent discussions and meetings which create the impression that people in the company are working harder, yet the economic results are getting worse (or at least do not improve). There is demand for prominent experts. Investments are carried out in a cautious and safe way, in consideration of market developments and changes in customer behaviour. But not all organisations experience this kind of problem – other companies, operating on similar or the same markets, develop and grow much faster. What is, or what might be, the reason for this? Perhaps the other companies are more agile, use integrated business processes, and their IT systems provide flexibility through the application of an integration platform. Maybe these companies use IT technologies that help them carry out the basic operations of their business in a more reliable and efficient way. IT technology, including the integration platform, becomes an asset in such undertakings and creates the basis for providing the company with flexibility and efficiency.
Agility and flexibility
From the point of view of strategy, flexibility in business is essential. Globalisation, the EC rules, and the need to shorten the cycle of introducing new products to the market all force agility by requiring the possibility of rapid changes in business processes. It should be kept in mind that organisations operate in a specific legal environment, which usually makes it necessary to adapt to a variety of legal requirements and entails certain expenditures. These expenditures do not generate added value for businesses. All these elements create the need to improve business flexibility.
According to various studies, the level of complexity of the systems used by organisations is increasing, which can even lead to discontinuation of these organisations' business activities. The complexity of the systems does not provide any added value from the point of view of the enterprise. One of the first steps towards flexibility is usually the introduction of integrated, standardised, and computerised processes. Such actions are often costly, long-term, and frequently entail organisational changes, but on the other hand they bring benefits in the long run.
Scenario
According to practice, computerisation supporting core business processes allows higher performance to be achieved, and IT investments generate greater value for such companies. Concentration of investments has a similar impact on IT investments. The most common scenario for preventing threats is composed of the following steps: strategy development, then development and implementation (or purchase) of appropriate IT systems. The biggest problem resulting from this type of action is that the whole sequence is repeated from the beginning for each initiative. This approach has many drawbacks, including the following: an unclear strategy (on the basis of which it is difficult to develop and implement appropriate IT systems), a strategy implemented sequentially and/or partially (it is not uncommon for each of its components to require different IT solutions), and an IT department that generally reacts only to the latest initiative when the strategy changes.
The main effect of this approach is the formation of so-called business silos. Such solutions do not form a basis for the company's activities, and the data used by the company are incomplete and outdated. A solution for a single silo works well, but any attempt to create common solutions entails many complications. Moreover, coordination of business processes is often impossible. A partial solution to this problem is data warehouses (thematic or covering all activities of the company). However, warehouses only make it possible to obtain data from different systems; they do not ensure the flow of data among these systems. This problem can be dealt with by an appropriate approach to the implementation of systems, i.e. an approach that supports the business processes in the organisation and provides a basis for the future operations of the system.
The basis is formed by a deliberate choice of the most relevant processes and IT systems to be integrated and standardised. It is also important to create a proper scheme of activities and to select mechanisms and tools for both management and technology.
Basis for activities
The organisation should develop appropriate standards for its operation in order to effectively create and utilise the basis for its activities. These standards include the following: the scheme of operation of the organisation (the so-called operating model), the corporate architecture, and IT governance.
These standards should include maintenance and development principles as well as cooperation rules. The basis for the organisation's operation should be revised and developed in a predictable and planned way. Creation of the organisational basis cannot be limited only to complex and individualised functions. It generally requires the computerisation and improvement of routine, daily processes that must be carried out properly and efficiently. At the same time, computerisation connected to the key elements of company activities increases the agility of the company. A side effect can be a significant decrease in the flexibility of core business processes in the organisation.
The operating model comprises two key components that shape how activities are carried out – integration and standardisation of business processes. Standardisation of business processes means defining the exact way in which a process has to be conducted, regardless of by whom and where it is performed. Standardisation is reflected in enhanced efficiency and provides greater predictability of the organization's activities; due to this greater predictability, innovation can be limited. Integration concerns primarily how data are exchanged between departments operating within the organisation. Thanks to the proper exchange of data among departments, integration within the organisation improves, and the result is improved customer service. The results of integration are higher efficiency, better coordination, greater transparency, and the ability to adapt to changing conditions.
Integration improves data flow and, consequently, the flow of information. The widespread availability of information requires the organisation to develop standardised definitions and data formats so that the various organisational and functional departments can share these data. Such decisions can be very difficult and can absorb a lot of time. Of particular importance is integration that allows standardisation of processes and, at the same time, ensures that data are understood consistently by individuals at various levels.
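A hypothetical sketch of what such standardised data definitions can look like in practice is given below: departments keep their local record layouts, while the platform exchanges one canonical format. The Customer schema, field names and adapter functions are invented for the example; they are not taken from the article.

from dataclasses import dataclass

@dataclass
class Customer:  # the canonical, organisation-wide definition
    customer_id: str
    name: str
    country: str

def from_sales(record: dict) -> Customer:
    # The sales department keeps its own field names; the adapter maps them.
    return Customer(record["cust_no"], record["client_name"], record["ctry"])

def from_billing(record: dict) -> Customer:
    return Customer(record["account_id"], record["holder"], record["country_code"])

# Records from both departments become comparable once standardised:
a = from_sales({"cust_no": "C-17", "client_name": "Acme", "ctry": "PL"})
b = from_billing({"account_id": "C-17", "holder": "Acme", "country_code": "PL"})
print(a == b)  # True – a consistent understanding across departments

Adapters of this kind are what allows data from individuals at various levels to be understood consistently, as the text puts it, without forcing every department to abandon its local systems at once.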
Using an integration platform in business processes is not easy, which is why it is best to apply a sequential approach. If the application of integration platforms is integrated into the life cycle of the project, they will not only fulfil current needs, but will also contribute to the expansion and development of the organisation's potential. The use of an integration platform increases the consistency between business goals and IT objectives. It also provides the possibility of coordinating business priorities with decisions related to the implementation of IT projects concerning these priorities.
After reaching advanced standardisation and integration of invariable (or slowly changing) elements, attention focuses on those elements that change frequently – which leads to strategic planning of the creation of an integration platform and, in turn, to a significant increase in flexibility. The introduction of an integration platform has a significant impact on many levels and aspects of the enterprise. A proper basis for the company's activities, together with an integration platform, makes it possible to have the proper information at the right time and to adapt smoothly to the changing environment.
IT governance means the standards of cooperation with the IT department which create mechanisms ensuring that technological and business projects achieve the local as well as the general objectives of the organisation. Generally, cooperation during project implementation in large organisations involves three parties: the company's management, the directors of business units, and the project director. At the highest level, the management sets the direction of activities and develops incentives motivating people to pursue the objectives of the whole organisation. The directors of business units are responsible for the results of their units, while the project directors typically focus on the success of the project. There are two important aspects of this model of cooperation, namely compliance of activities and coordination. IT governance requires the use of best practices related to the tools and techniques of project management, ensuring that projects take into account the objectives and priorities of all parties involved. Achieving compliance between activities related to technology and business ensures that investments in IT bring added value. Coordination of arrangements between the IT and business departments should be an integral part of any management process. A model in which IT governance is connected to the supervision of the project provides for coordination and adjustment of activities.
IT governance determines the structure of decision-making powers and responsibilities, enabling the correct approach to the use of IT technologies. It focuses on the management and use of computerisation in order to achieve the overall results of the organisation. It should be kept in mind that information technology is closely linked to other key enterprise assets, such as financial assets, human resources, the workforce, and know-how.
Standardisation and integration of business processes
An organisation which aims to create a strong basis for its activities needs – in addition to a detailed definition of the level of integration and standardisation – an integration platform, which will create the basis for decisions related to the management of the organisation. It takes into account the respective levels and the requirements set before the integration and standardisation of business processes. The planned creation and use of an integration platform provides a basis for the computerisation of the core activities of the organisation. The integration platform provides an opportunity to bring logical order to business processes and the IT infrastructure. Its essence is to identify those business processes, data resources, and information technologies which will lead to the implementation of the organisation's development strategy.
Unfortunately, quite often only IT departments deal with all the elements of the construction of the integration platform. Creating a platform should begin with the management of the organisation defining the level of standardisation and integration being pursued. Defining the key customer groups, key business processes, and the data resources and technologies that have to be standardised and integrated requires the determination of an appropriate course of action.
The platform should reflect, first, the overarching logic of the requirements of the business processes, and only later the possibilities of modern information technology. In matters of technology the IT department is self-sufficient; however, in order to ensure that the organization's needs are fully met, the IT department should be informed about how each process is to be implemented and about the scope of the data used in these processes. Solutions created exclusively by IT departments will not be used in practice.
Advancement levels of integration platforms
Modern technologies are blurring the boundaries between disciplines, and increasing globalisation creates new opportunities for business development. Creating a basis for business activities requires implementing changes in key business processes and IT systems. New systems have to be designed, and new business processes and technologies implemented. The integration platform should be designed and implemented according to strict rules so as to ensure business continuity.
Different levels of advancement can be distinguished with regard to the created platform, together with the corresponding logic of cooperation and the methods of crossing these levels. The following levels can be distinguished: business silos, technology standardisation, activities optimisation, and business components. Generally, organisations create an initial state and then move to the next, more advanced levels of platform structure and use, improving their basis for activities.
Organisations gradually transform business processes and change their approach to investment in IT technology by creating an integration platform. At the maturity level of business silos, basic investments in IT infrastructure relate to using opportunities and solving problems of local scope. The role of IT departments is limited to the automation of chosen business processes. Systems created and implemented at this level fully address individual business needs. The systems are designed and implemented for use within the structures of the functional departments of the organization (hence the name "silos"). This approach does not restrict the conduct of business by the organisational unit, and it can also encourage innovation. However, such solutions mean that the operating IT systems cannot cooperate with each other. At this level the IT departments achieve high proficiency in systems integration, but combining different IT systems becomes more and more complex, and these activities increasingly become merely apparent actions. Over time, the major systems acquire so many intermediate links among them that any attempt at amendment becomes very risky, as well as costly and time-consuming. The main disadvantage of this solution is the problem of the integration and standardisation of business processes. After some time, however, there arises a need to improve the efficiency of IT technology and to create a sustainable platform for data resources and business processes that will support the business activities. This involves a change in the approach to investing in IT technology. Technological standards are accepted, the number of technology platforms is limited, and local solutions are replaced by solutions at the organisational level. The company enters the second level: the standardisation of technology. That leads to much lower costs of maintaining such infrastructure; however, it also means that IT solutions are less flexible.
At the level of IT technology standardisation, the role of IT departments is to computerise business processes at the local level (analogous to the business silos). The difference is that the primary criterion becomes the reliability of the systems and the decrease of costs across the enterprise, rather than the functionality of the respective systems. The main aim is to raise the level of management of technological standards. In this case, the most noticeable difference is that the management allows IT solutions to have an impact on business solutions. This reduces risk, raises the level of customer service, increases safety, and improves reliability. In addition to the consolidation and standardisation of hardware, at this level of maturity companies significantly reduce the number of IT systems that duplicate each other's functions. Standardisation, however, does not provide solutions concerning the data resources used by particular systems. Solutions for data warehouses, or solutions enabling some data to be shared, are introduced, but the data on operational activities continue to be associated with the systems that directly use them. Further activities related to the implementation of integration platforms concern precisely the standardisation of the data resources and business processes used by the entire organisation.
At the level of activities optimisation, the organisation starts to view data and systems from the perspective of the whole organisation rather than a local one. By extracting data from different systems for ongoing operations and transactions and making them available to the relevant business processes, redundancy is reduced to a minimum. Appropriate mechanisms are developed for corporate data that are critical and, if necessary, for standardised business processes and IT applications. The result is that basic investments in IT technologies are made with the aim of developing organisation-wide systems and shared data, rather than local systems and shared ICT infrastructure. Although implementing changes in automated and optimised processes is more cumbersome, the development and implementation of new services and products is definitely faster. Data resources and technology platforms are used for different purposes, and they rely on business processes that are fully predictable. Very often, the standardisation of data resources and business processes is the most serious effort to be undertaken at all levels of the company hierarchy.
Subsequent levels are associated with increasing the flexibility of the organisation through the introduction of components that are, on the one hand, reusable across many processes and, on the other, individualised. More and more complex business processes undergo computerisation. At this level, the company's management conducts a kind of negotiation with the directors of IT departments, making it possible to determine which business processes are necessary, which of them should be standardised, and which can be developed at the local scale. If the organisation wants the development of components to bring tangible benefits, it must quickly notice the strategic opportunities that will develop its business activities in the most appropriate way, and then use or develop the components which will enable that development. This approach makes it possible to reap the benefits of the standardisation of IT technology, the integration of business processes, and the multiple reuse of existing data resources.
Entering the next level of advancement of the integration platform requires changes in the way business is carried out. With the change of perspective from local to global, there is a need to move away from improvements at the local level and to make the transition to a global scale. This implies profound changes in terms of flexibility at various levels of management, particularly at the global level. This is visible, above all, in development based on components. Both local and global flexibility increase. If a robust integration platform for core business processes, data resources and technology is in operation, the organisation at every level can use the necessary business components, whose implementation is simple thanks to the use of standardised and unified interfaces.
Advantages of integration platform
During the creation of the basis for activities, the integration platform is a very important element because it covers the core business processes, data resources, and IT technologies that enable the achievement of the required level of integration and standardisation. In the process of implementing an integration platform, organisations can achieve many benefits that are often independent of each other. On reaching each next maturity level, new and improved technology is generally created, and new benefits emerge. This is clearly visible in the following areas: IT costs, the responsiveness of computerisation, risk management, management satisfaction, and economic results. As an integration platform brings a degree of discipline to processes and systems, companies start to control the excessive costs associated with business silos. Costs are reduced mainly because the company invests more resources in shared IT capabilities, which are later used for other purposes. It may also result from the fact that at the highest level of integration organisations invest more in innovation. At this level, a solid basis for business activities already exists. Basic business processes, and access to the key data on products and customers needed to develop new products and services, have been automated. New opportunities can be used to improve core business activities. All these opportunities require investment in IT technology, but there is a simple way to justify the new investments: although they increase IT costs, they create new business opportunities.
During the process of standardisation, the persons responsible for computerisation and the business directors take fewer decisions concerning technology and therefore spend less time on technology issues and unexpected technical problems. As a result, both the planned and the actual time needed to create and implement new systems is reduced. Integration of the IT infrastructure, sharing of data resources, and organisation-wide systems facilitate the management of IT technologies. This entails a significant improvement in risk management. Tolerance of potential crises increases, and the likely losses from economic crises and disasters decrease. Security is improved: the possibility of avoiding the risks associated with unauthorised access to private and confidential data (both external and internal) increases.
Through this platform organisations achieve very important strategic results: improved operational efficiency, better customer relationships, a stronger product position, and greater flexibility in implementing the strategy of the organisation. The possibility of achieving specific benefits with regard to the basis for business activities may result not only from changes in the approach to investing in IT technology, but also from the application of new management practices.
Bibliography
[1] Kiedrowicz M., Niedzielski M., Examples of implementation of enterprise architecture Description Framework xGEA, in: Sobczak A., Szafrański B. (eds.), Introduction to enterprise architecture, WAT, Warsaw, 2009, ISBN 978-83-61486-10-7.
[2] Kiedrowicz M., Process approach and the services provided by the government, eGovernment, No. 5 (12) / 2007, ISSN 1895-6335.
[3] Kiedrowicz M., Power management - public resources in selected EU countries, Modern
management systems, No. 6/2011, ISSN 1896-9380.
[4] Ross J.W., Weill P., Six decisions regarding IT systems, which should not be made by IT staff, Harvard Business Review Poland, HBRP 4, 2003.
[5] Ross J.W., Weill P., Robertson D.C., Enterprise Architecture as Strategy, Harvard Business School Press, 2006, ISBN 1-59139-839-8.
Summary
Key words: integration platform, standardisation, integration, business process
The article discusses aspects related to the essence of integration platforms and their importance in the development of enterprises. The importance of the flexibility and agility of the organisation as a basis for effective competition in the market is discussed. The advancement levels of integration platforms and the impact of the standardisation and integration of business processes on the basis for the business activities of the organisation are also presented.
Znaczenie platformy integracyjnej w organizacji
Streszczenie
Słowa kluczowe: platforma integracyjna, standaryzacja, integracja, proces biznesowy
W artykule omawiane są aspekty związane z istotą platform integracyjnych i ich znaczeniem
w rozwoju przedsiębiorstw. Dyskutowane jest znaczenie elastyczności i zwinności organizacji, jako
podstawa skutecznego rywalizowania na rynku. Zaprezentowane zostały także poziomy zaawansowania platform integracyjnych oraz wpływ standaryzacji i integracji procesów biznesowych na
fundament działania organizacji.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Olena K. Yakymchuk
Vinnytsia Cooperative Institute, Ukraine
Concept of managing regional budgets
during transition to sustainable self-development
Statement of the problem. Under the conditions of reforming the whole system of regional administration and local self-government on the principles of decentralization, and of the transfer of powers to the regions declared by the government, the question of civil society development has acquired strategic importance for the progress of Ukraine, since further success in reforming governmental control and bringing it closer to European standards will depend on the foundation of viable, active and responsible civil society institutions.
An important component in addressing these challenges is financial support and its effective management, the importance of which is growing under the conditions of an unstable internal and external environment and limited experience in planning and the optimal use of financial and other resources at the level of regions and communities.
A key element of fiscal policy in FY 2014, in terms of the interaction of the state budget with local budgets, is the balancing of nationwide and regional interests aimed at the effective use of the economic and ecological potential of each region of the country [20].
In this regard, the efforts of scholars and practitioners are urgently needed in researching a concept of regional and community development, formulating ways to improve governance, the legal framework and regulations, methods of assessing progress on all indicators of regional management, and forms of control over the implementation of plans.
Overview of recent research and publications. Certain methodological, methodical and organizational aspects of financial management have been researched by domestic and foreign scholars, such as O.H. Bila [2], O.I. Blank [3], O.O. Beliaev [4], I.V. Zapatrina [6], O.I. Lunina [9], L.O. Myrhorodska [12], V.S. Katkalo [18], and E. Yasin [19].
However, a unified approach to the development of regional budgets, to the transition of regions and communities to self-governance, and to using the potential of each region is not yet available.
Therefore, the problem of researching and theoretically substantiating a concept for the development of regional budgets with the broad involvement of the public remains open.
Goal setting. To provide a theoretical justification of the concept of a self-governed regional budget.
The body of the topic. The concept of regional budget management, in our opinion, is a set of goals, methods and tools for influencing the defining element of regional finance, subordinated to a common understanding of its place and role in the social-economic development of a region. From this perspective, we can identify a number of basic features of a regional budget.
In fact, a regional budget is the largest monetary fund which, in accordance with the principle of fiscal federalism, is managed directly by the public authorities of a country's region. This gives local authorities a wide range of possibilities for making quite effective decisions aimed at changing the conditions of life in a region.
One more feature of a modern regional budget is the task-oriented impact on the structure and size of its revenues, which allows the standing of business entities operating on the regional territory to be radically influenced through the implementation of a certain fiscal policy, thus shaping the economic environment of the region as a whole, including its sectoral structure [1, pp. 90-101].
The main feature of a regional budget, in our view, is that the periodic adjustment of the structure of, and changes in, regional budget expenditures in the process of revenue maneuvering provides a real opportunity for the implementation of approved priorities for the development of a region and a rational, effective combination of the social and economic components of regional policy.
Moreover, setting a reasonable size of the tolerated and required deficit (or surplus) of the regional budget, combined with the adoption and implementation of a set of financial measures (using the surplus), forms an important part of the monetary macroeconomic incentives shaping the trends and pace of regional social-economic progress.
In this regard it should be noted that the main features of regional budgets characterize their potential, which acts as a tool of general regional regulation; its effective use largely depends on how objective and realistic, in the general scientific (methodological) and applied (methodical) sense, the grounds of an individual regional budget are. In our opinion, this level may be guaranteed only by a regional budget management system that focuses the budget on the implementation of its functions in line with the current status of the region, thus creating conditions for the effective implementation of the complete complex of regional responsibilities in the budgetary and general financial spheres [1, pp. 90, 95-101].
Accordingly, the author proposes the following meaning of the term "regional budget management": regional budget management (the management of the budget of a region of the country) is a complex process of objective and reasonable influence on the parameters of the budget in order to ensure the growth of the financial potential of the region and of the grounds for regional social-economic progress.
This should be complemented with the provision that the comprehensiveness of the process of regional budget management is a must: its execution determines the effectiveness of management as a whole and is characterized by shaping the relationship between individual stages and all stages of the budget process, starting from identifying trends and forecasting the basic social, economic and financial performance of the region, through the development and approval of a budget, to its actual implementation [1, pp. 107-112].
Each stage of the budgeting process is quite different from the others in content, which requires, on the one hand, specific methods and forms of administrative influence and, on the other hand, the definition of a control diagram common to all stages that ensures their mutual coherence and the integrity of the regional budget process as a whole.
The objective reasonableness of regional budget management as a whole, and of each individual subject of the managerial system, presumes the application of modern methods of researching the initial state (input data) of the managed items (budget parameters) and, most importantly, of the social-political and social-economic environment directly and indirectly determined by this state.
The maximally possible exclusion of subjectivity from management as a whole is a fundamental problem of management, and in the case of managing an object as complex as a regional budget, which is systemic and comprises a large variety of elements (subsystems), including social ones by nature, solving this problem involves special challenges. Among the ways to solve it is the comprehensive use of a variety of methods for modeling systems and their integral parts, which makes it possible to test a large number of options for draft management solutions before their final approval in order to identify the most effective ones [13, pp. 30-41].
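As a rough illustration of testing many draft options on a model before approval, consider the toy sketch below; the scoring rule and all numbers are invented for the example and carry no methodological weight.

```python
import random

def score(revenues: float, expenditures: float) -> float:
    """Toy effectiveness measure: reward spending, penalise deficit heavily."""
    deficit = max(0.0, expenditures - revenues)
    return expenditures - 3.0 * deficit

random.seed(1)
revenues = 100.0
# Generate many candidate decisions (expenditure levels) and test them
# on the model instead of on the real budget.
candidates = [random.uniform(80.0, 130.0) for _ in range(1_000)]
best = max(candidates, key=lambda e: score(revenues, e))
print(f"best expenditure level under the toy model: {best:.1f}")
```

The value of such modeling lies not in the toy numbers but in the ability to compare a large number of candidate decisions cheaply and objectively before any of them is approved.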
At the most general level, the parameters of the regional budget which constitute the managed objects are budget revenues, expenditures and the deficit; the rational relationship between them determines not only the quality of the budget but also its effectiveness as a regulator of regional development. This perception of the managed objects reflects only the first stage of their structuring and, therefore, may serve only as the basis for the preliminary stage of project design, guided by decisions of a normative nature characterized by a predominance of demand over the possibilities of satisfying it. To ensure realistic managerial decisions, these objects should be subjected to further differentiation. In this case the degree of differentiation must be the one required for maximum efficiency and objectivity of the representation of the object [13, pp. 20-28].
Critical for the quality of regional management (of the integral elements of the regional budget) is a clear definition of the purpose of managerial actions, since, having accurately defined the scope of required changes in an object's development, one may determine the condition these changes should ensure. In this case, one should consider that the condition of the integral elements of the regional budget recognized as managed objects, and the assessment of the general condition of the budget, cannot be considered a final objective according to these traditional parameters. This statement of the author is based on the fact that finance and financial resources are not self-sufficient items but separate elements of the total economic system, whose mission is to create the most favorable conditions for economic growth; in other words, they correspondingly reflect a subsystem for ensuring trade relations (finance as monetary relations) and exchange proportions (financial resources).
At the same time, since there is an obvious feedback whereby the structure and condition of the financial subsystem and of its integral part, the budget, are derived from the structure and condition of the economic basis, the latter's own goals should also be taken into account when solving the problem of the task orientation of the described process. The above suggests that determining the task of regional budget management should be based on its multi-component and hierarchical nature. The highest level of the hierarchy of regional budget management, according to our perceptions, should be assigned to regional economic growth, thereby fixing the domination of the economy over finances and highlighting the economic base as the main factor of regional welfare [7].
The next level of the hierarchy should be attributed to the financial potential of the region, which is the most important, but not the only, determinant of regional economic potential. However, the notion of the "financial potential of the region" is essentially categorical in nature, and it cannot be evaluated directly in terms of quantity and cost. Therefore, in order to meet the system-wide requirement that goals be set according to quantitative criteria and a hierarchy of objectives, it is necessary to develop and introduce such local managerial goals as the financial efficiency of the regional budget, its economic and social performance, and its balance.
We consider that, due to the novelty of some of these concepts, as well as the need to develop methodological provisions for their quantitative assessment, it is reasonable to articulate them in more detail [16, pp. 20-25].
The above concepts have a dynamic nature and are used in accordance with the provision that a certain period of time, different in each case, must elapse between the financing period (the allocation of monetary resources for the satisfaction of particular needs) and the outcome of this financing.
In the case of budgetary financing, it is reasonable to take one year as the reference period for the evaluation of its performance. This period should be taken, first, because of the legislatively established discreteness of the budget process and, secondly, because some results (at least transitional ones) of the financing of almost every item, except for long-term investments of a capital nature, become partially known over this span of time. In this case, it is meaningful to assume that the most important basis for budget revenues in the next year is the reasonableness and appropriateness of budget expenditures, which return to the budget as revenues of different kinds through the natural (economic) circulation of money [15, pp. 35-40].
This methodological statement is based on the concept of the "financial efficiency of the regional budget", which means the ratio of the amount of expenditures in the basic year to the revenues of the budget in the reference year adjusted for inflation; it can be calculated without significant methodological difficulties using available statistical data.
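Under this definition, the indicator can be written down as follows (the notation is introduced here purely for illustration and does not come from the cited sources):

\[
\mathrm{FE} = \frac{E_{0}}{R_{1}/(1+\pi)},
\]

where \(E_{0}\) denotes budget expenditures in the basic year, \(R_{1}\) budget revenues in the reference year, and \(\pi\) the inflation rate used to bring revenues to basic-year prices.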
The economic efficiency of the regional budget should be interpreted as the ratio of expenditures in the basic year to the economic results of these costs. Note that, considering the economic performance of the region, it is difficult to distinguish the part driven mostly by budgetary expenses, since the process of forming these parameters is quite complicated and influenced by many factors, among which, according to the author, the most important in some cases are the structure and scope of the financing of the economy from the regional budget [17, pp. 60-68].
In addition, the course of development of the national economy is indirectly influenced not solely by economic budgetary expenses (e.g. the financing of specific industries or specific investment programs), but by almost all other items of the budget (the funding of such spheres as law enforcement, which is far removed from industry but may significantly influence the ease of doing business in the country).
All this allows us to propose the most general methodological approach to the construction of quantitatively measurable indicators of the economic efficiency of the budget, according to which all budget expenditures of the basic year and such macroeconomic indicators of the regional economy as the gross regional product, or the proportion of national income created in the region, should be taken into account. Despite the certain conditionality of the relationship between these indicators acting as fiscal performance characteristics, it should be recognized that a dynamic analysis of the ratio of budgetary expenditures to the macroeconomic result of the regional economy, combined with a study of the dynamics of individual cost items of the regional budget and of the macroeconomic parameters of the regional economy, may give a clear understanding of the economic significance of the budget items.
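One possible formalisation of this approach, with symbols introduced only for illustration, relates all basic-year expenditures to the macroeconomic result:

\[
\mathrm{EE} = \frac{E_{0}}{\mathrm{GRP}_{1}},
\]

where \(\mathrm{GRP}_{1}\) is the gross regional product (or the region's share of national income) in the reference year; it is the dynamics of this ratio over time, rather than its absolute value, that carries the analytical meaning described above.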
More methodologically difficult is the problem of the quantitative measurement of the social efficiency of the regional budget, which, in our opinion, means the ratio of the scope of budget expenditures to changes in the level of social protection in the region. The complexity of the issue lies in the fact that, first, as in the case of economic expenditures, it is not only the social spending of the budget that determines the state of social services in the region, since the majority of the population's social problems are solved out of personal income earned as a result of labor (economic) activities and, secondly, a generalized quantitative assessment of the characteristics of social protection is in many cases not feasible because of the impossibility of bringing together its individual parameters [1, pp. 94-95].
To solve the problem of the quantitative evaluation of the social efficiency of the regional budget, in our view, we should take into consideration all budgetary implications of the basic year, while for the measurement of the level of social protection we can use generalized expert opinions on the change of this characteristic over time within the reference period, at the same time making a parallel analysis of the dynamics of budgetary costs, as for economic efficiency.
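Read literally, the definition above admits the following sketch of a formula (the symbols are ours, not the author's):

\[
\mathrm{SE} = \frac{E_{0}}{\Delta \mathrm{SP}},
\]

where \(\Delta \mathrm{SP}\) is the expert-assessed change in the level of social protection over the reference period.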
The most important indicator from the methodological standpoint is the budget balance, which means the ratio of budgetary revenues to spending without taking into account the sources of covering the budget deficit (or the surplus), or the share of the deficit (surplus) in budget expenditures (revenues).
We believe that this methodological approach provides an overall assessment of the balance, which may be supplemented by the level of the budget's own balance, that is, the ratio of the budget's own revenues to budget spending. This indicator, whose content is based on the fact that inter-budget transfers are an implicit form of financing the budget deficit, may in some cases give the most objective assessment of regional finance.
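These two indicators can be sketched as follows (the notation is ours; excluding inter-budget transfers \(T\) from own revenues follows the treatment of transfers described above):

\[
B = \frac{R^{*}}{E}, \qquad B_{\mathrm{own}} = \frac{R^{*} - T}{E},
\]

where \(R^{*}\) denotes budget revenues net of deficit-covering sources and \(E\) denotes budget expenditures.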
At the same time, considering control over the regional budget balance as part of the general management process and, consequently, the balance as a local managerial objective, it should be noted that the pursuit of the maximum convergence of budget revenues and expenditures is not always justified and does not always lead to growth of the financial and economic capacity of the region, especially under the concept of the primacy of revenues and the subordination of expenditures.
The presence and size of a budget deficit are not in themselves negative indicators and should be considered from the standpoint of the capacity of the sources to cover it. If the region is capable of covering a deficit at the expense of real sources, such as objectively reasonable and well-resourced loans that do not require bulky and regular refinancing, the deficit should be recognized not only as acceptable, but as a necessary means of stimulating regional development. Note that from this perspective the most effective means of managing the balance of the regional budget are considered to be targeted investment loans, which presume repayment out of the revenues derived from the particular measures that are the subject of the capital investment.
In our view, in contrast to the financial effectiveness of the regional budget in its economic and social applications, in solving the problem of the budget balance as a local objective of the regional budget we should recognize a certain size of budget deficit as reasonable (that is, positively influencing the financial and economic potential of the region).
The general chart of regional budget management, which clearly represents the relationship between the objects of management (revenues, expenditures and the budget deficit) and its objectives (local, general and global), is shown in fig. 1.
Fig. 1. The general chart of regional budget management under conditions of sustainable self-development
[Diagram. Local managerial objectives: financial performance of the budget, economic performance of the budget, social performance of the budget, budget balance. General purpose of regional potential management: the growth of the regional financial potential. Global objective: the growth of the regional economic potential. Objects of management: revenues, expenditures, deficit. Managerial functions: planning, organization, motivation, monitoring.]
Source: developed by the author.
An important feature of this chart is the inclusion of the management function block, i.e. those regular operations whose implementation allows the whole management process to draw on the achievements of modern management and ensures the maximum orderliness and efficiency of budget execution, provided the actual results stipulated by the plan are feasible.
The execution of each of the managerial functions for such a specific object as the regional budget may be a separate area of special study; here we narrow the detailed analysis and the formation of methodical provisions to only one, in our view key and defining, task of management in general: the functioning of the regional budget [17, pp. 60-68].
It is important to draw attention to the fact that the process of budget planning should clearly distinguish between the fundamental features of such management objects as revenues, expenditures and the budget deficit. For each of these managed objects, it is necessary to develop its own management system based on a systemic and structural analysis of the object, which makes it possible to take into account its characteristics and trends and to identify fundamental approaches and methodological solutions for the planning-and-calculation justification of the required development model.
The main methodological feature of regional budget planning is the fundamental integrity of the process, based on the fact that all the managed objects are closely related and the estimated parameters of one of them are planned restrictions for the others, thereby requiring an iterative solution of the common problem for all the managed objects [6, pp. 60-62]. The sketch below illustrates this interdependence.
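The interdependence just described, where the planned value of one object acts as a restriction on the others, can be illustrated by a deliberately simplified iterative sketch; the adjustment rule, the deficit limit and the figures are invented for the example.

```python
def plan_budget(revenues: float, desired_expenditures: float,
                max_deficit_ratio: float = 0.05, step: float = 0.5,
                max_iter: int = 1_000) -> tuple[float, float]:
    """Iteratively trim planned expenditures until the deficit limit holds.

    Revenues act here as a planned restriction on expenditures, so the
    two objects are solved jointly rather than in isolation.
    """
    expenditures = desired_expenditures
    for _ in range(max_iter):
        deficit = expenditures - revenues
        if deficit <= max_deficit_ratio * revenues:
            break
        expenditures -= step  # the restriction feeds back into the plan
    return expenditures, expenditures - revenues

exp, deficit = plan_budget(revenues=100.0, desired_expenditures=120.0)
print(f"planned expenditures: {exp:.1f}, deficit: {deficit:.1f}")
# prints: planned expenditures: 105.0, deficit: 5.0
```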
Findings of the study. Our study leads to the following conclusions:
1. Depending on the set objective, designing a concept of regional budget management requires considering the rational interaction between the regional economy and finance, and the relevance of the targeted development of the regional budget with different levels of combination of the national and regional budget and management systems.
2. It is necessary to design the concept of optimal regional budget management with due regard to achieving a balance between budget revenues and expenditures, to establishing interaction between the groups of revenue management functions in the regional budget, and to improving the feasibility of decisions taken to comply with the economic and financial relations between the structure and size of budget revenues.
3. Creating an effective revenue planning system for the regional budget, which links the optimal size of budget revenues to the economic and social development of the region, enables using the existing preconditions for rationalizing the cost management system of the regional budget and for establishing a strategic cost planning system in a task-oriented manner.
4. It is advisable to pay attention to applying the group of principles of transparency to forming the regional budget, which is an instrument for regulating the rates and directions of social-economic progress, to conducting a step-by-step analysis of budget execution, and to introducing a program-oriented approach to regional budget management.
5. The successful implementation of the conceptual provisions for the rational management of the regional budget, which is a complex process of objective and reasonable influence on budget parameters in order to ensure the growth of financial potential, may be assessed by the rational balance of budget revenues and expenditures, the economic growth and financial capacity of the region, financial performance, the level of economic and social efficiency, and the balance of the regional budget.
6. For a long time regional budgets, especially at the community level, were not given appropriate attention. Due to the decentralization of authority in Ukraine, the issue of developing and executing regional budgets comes to the fore.
List of References
[1] Budget Code of Ukraine. Code of Ukraine of July 8, 2010 No. 2456-VІ. The
official text amended and supplemented as of January 30, 2014. Ministry of
Justice of Ukraine, No. 2/2014.
[2] Bila O.H., Finansove planuvannia i prohnozuvannia: [teach. manual] Bila O.H. –
Lviv: Kompakt-LV, 2007.
[3] Blank I.A., Fundamentals of Finance Management. 2 volumes / I.A. Blank. – [3rd edition amended and supplemented]. – M.: OMEHA-L, 2011. – Volume 1.
[4] Beliaev O.O., Derzhava i perekhidna ekonomika: mekhanizm vzaiemodii / Beliaev О.О., Belelov А.S., Komiakov О.М. – К. : KNEU, 2003.
[5] Zapatrina I.V., Biudzhet rozvytku u konteksti zabezpechennia ekonomichnoho zrostannia
// Ekonomika i prohnozuvannia. – 2007. - No. 3.
[6] The Law of Ukraine “On local governments in Ukraine” (Regulatory documents with last amendments as of September 22, 2011). Sumy. – TOV “VIP
NOTIS”, 2011.
[7] The Law of Ukraine “On a national program to promote small business in
Ukraine” – [Electronic source]. – Access address: http://zakonl.rada.gov.ua
[8] Lahutin V.D., Biudzhetna systema ta monetarna polityka: koordynatsiia v transformatsiinii ekonomitsi / V.D. Lahutin. – K.: KNTEU, 2007.
[9] Lunina I.O., Derzhavni finansy ta reformuvannia mizhbiudzhetnykh vidnosyn / I.O.
Lunina. – K. : Naukova dumka: NAS of Ukraine, 2006.
[10] Liovochkin S.V., Makrofinansova stabilizatsiia v Ukraini u konteksti ekonomichnoho
zrostannia / S.V. Lovochkin. – K.: Nasha kultura i nauka, 2003.
[11] Malyi I.I., Derzhava i rynok: filosofiia vzaiemodii / Malyi I.I., Dyba M.I., Halaburda
M.K. – К.: KNEU, 2005.
[12] Myrhorodska L.O., Finansovi systemy zarubizhnykh krain / L.O. Myrhorodska. –
K.: Tsentr navch. literatury, 2003.
[13] Metodychnyi posibnyk shchodo formuvannia proektu Stratehii zbalansovanoho rozvytku terytorialnoi hromady. – V.: Vinnytsia Regional Council, FOP
“Korzun D.I.”, 2012.
[14] Oparin V.M., Finansova systema Ukrainy (teoretyko-metodolohichni aspekty)/
V.M.Oparin. – К.: KNEU, 2005.
[15] Program of economic and social development of Vinnytsia region for FY2013.
– V: Vinnytsia Regional Council, FOP “Korzun D.I”, 2012.
[16] Propozytsiia shchodo doslidzhennia vprovadzhennia polityky samorehuliuvannia (Analiz polityky samorehuliuvannia: propozytsii spilky kryzysmenedzheriv Ukrainy ta Instytutu analizu polityky ta stratehii). – К., 2010.
[17] Paryzhak N., Finansova systema v Ukraini: otsinka i napriamy reformuvannia (N.
Paryzhak). Svit finansiv. – 2010. No. 4 (21).
[18] Katkalo V.S., Resursnaia kontseptsiia stratehycheskoho upravlenyia: henezys osnovnykh idei y poniatyi / V.S. Katkalo // Vestnyk S.-Peterb. Un-ta. Series 8. – 2002. – Issue 4 (No. 32).
[19] Iasin E., Bremia hosudarstva i ekonomycheskaia polytyka (lyberalnaia alternatyva)/
E. Iasin// Voprosy ekonomiky. – 2002. - No. 11.
[20] Official site of the Ministry of Finance of Ukraine. – [Electronic source]. –
Access Address: http://minfin.gov.ua
Summary
Key words: regional budget, management concept, self-development, regional potential, financial
policy, budgetary financing, budgetary expenditures
In the article the author gives a theoretical justification for the concept of a sustainable, self-governed regional budget.
Koncepcja zarządzania budżetem regionalnym
w warunkach przejścia do zrównoważonego rozwoju
Streszczenie
Słowa kluczowe: budżet regionalny, koncepcja zarządzania, samorozwój, polityka finansowa
W artykule autorka przedstawia teoretyczne podstawy koncepcji zrównoważonego budżetu regionalnego.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Zdzisław Sirojć
The Maria Skłodowska-Curie Warsaw Academy
The University of Management in Warsaw
Social capital management in the contemporary city
The concept of social capital is at a stage of development and requires further improvement, especially in relation to major cities and regions. Its achievement is to draw attention to the importance of the social aspects of management in today's globalizing world.
The aim of the study is to present some aspects of social capital management in a great city.
The notions of the great city and the process of metropolization
Major cities are playing an increasingly important role in today's globalized society. The notion denotes individual settlements with relatively large populations, occupying a considerable territory, with metropolitan buildings, a diversified economy, a wealth of social institutions and a multifunctional urban system. Great cities or large urban agglomerations have more than 500 thousand residents (Z. Sirojć 2012, 79-80, 199).
The globalization of the economy brings about the process of metropolization, with some cities taking over managerial functions in the management of the post-industrial economy on an international scale.
The essence of the process of metropolization consists in the weakening of the relationship between the metropolis and its region in favor of cooperation and competition between cities (B. Jałowiecki 2002, 36).
The metropolises of the United States, Japan and the European Union dominate in today's world. The most important global centers are New York, London and Tokyo.
The notion and the structure of the social capital
Until now the notion of social capital has not been clearly defined. The scientists who created the basis of the concept (P. Bourdieu, J.C. Coleman, F. Fukuyama, R.D. Putnam) attempted to describe it in different ways (tab. 1).
Table 1. The notion of social capital according to the creators of the concept

P. Bourdieu – Social capital as the sum of current and potential resources connected with group membership, giving its members specific support.
J.C. Coleman – Social capital as the ability of interhuman cooperation within a group framework and an organization for the purpose of the realization of common interests.
F. Fukuyama – Social capital as the ability resulting from the spread of confidence within society or its parts.
R.D. Putnam – Social capital as confidence, norms and connections facilitating cooperation in achieving mutual benefits.

Source: elaboration on the basis of E. Rak-Młynarska (2004), Kapitał społeczny, in: B. Szlachta (ed.), Słownik społeczny, WAM, Kraków, p. 497-504; see also: P. Bourdieu, C.D.J. Wacquant (2000), Zaproszenie do socjologii refleksyjnej, Warszawa; J.C. Coleman (1988), Social capital in the creation of human capital, "American Journal of Sociology" No 94; F. Fukuyama (1997), Zaufanie. Kapitał społeczny a droga do dobrobytu, Warszawa-Wrocław; R.D. Putnam (2000), Bowling alone: the collapse and revival of American community, New York.
In this article we use the following definition of social capital: it is the standards, customs, relations and organizational solutions representing a specific value and helping to connect people in order to perform actions for the common good.
The analysis of the idea of social capital allows us to distinguish its major components (tab. 2). In our deliberations we adopted the classification of the structure of the elements of social capital used by the World Bank.
Table 2. The components of the social capital according to the World Bank

Structural capital (macrocomponent) – microcomponents: social nets, social groups, social structures, social institutions, inquiry channels, exchange of information.
Regulatory capital (macrocomponent) – microcomponents: social norms, examples of the realization of values, confidence, solidarity, customs and manners.
Behavioral capital (macrocomponent) – microcomponents: indications of cooperation and mutual business, indications of collective activities, acquaintances.

Source: Ch. Grootaert, T. van Bastelaer (ed.) (2002), Understanding and measuring social capital. A multidisciplinary tool for practitioners, The World Bank, Washington, DC; after: M. Theiss, Operacjonalizacja kapitału społecznego w badaniach empirycznych, in: H. Januszek (ed.) (2005), Kapitał społeczny we wspólnotach, Wyd. AE, Poznań.
Management of social capital and its basic stages
Managing social capital is a purposeful activity of people in authority. Its development and use of intangible resources for common good.
In the process of social capital management, we can distinguish the following stages:
• construction of a development strategy,
• identification of resources,
• evaluation of resources,
• creation of conditions for development,
• monitoring of the management process and the development of resources,
• estimation of results.
The management of social capital may occur in different settings: a group, an organization, a region or the State.
Some aspects of social capital management in the contemporary great city
Great cities are among the many settings of social capital management. The management of social capital in a great city is determined by the following factors:
• the aims of management: satisfaction of social needs and development of the city,
• the complexity of a large city's structure,
• the variety of connections and the nature of the interactions,
• the way management is appointed,
• the frequent lack of professional management and the low quality of administration,
• the lack of precise criteria for assessing the management of a great city.
In our opinion, the realization of the various elements of social capital management in the city should take into account the following recommendations:
• it is desirable to conduct continuous evaluation,
• it is necessary to create conditions for the development of resources,
• constant monitoring of the processes is needed,
• it is recommended to evaluate the results of management.
How should the management of social capital in a large city be assessed? Up to now there has been no scientific discourse on this subject. We are conscious of the fact that this paper is not descriptive enough. Still, we try to suggest a few effective criteria for its evaluation:
• improvement of the level and quality of life of residents,
• the socio-political activity of the people,
• the degree of involvement of the city in the process of globalization,
• the degree of residents' satisfaction with the city, etc.
For each of these criteria specific indicators of measurement can be assigned, for example:
• involvement in the process of globalization can be measured by the increased number of connections with other centers, the number of offices and representative offices of companies, social organizations and other cities and regions, the number of international companies and cultural institutions, etc.,
• to measure the improvement of the level and quality of life we can use such indicators as the increase of the Gross Metropolitan Product, the growth of the city product per capita, and an increase of the City Development Index, which combines the increase of the city product, the improvement of the quality of infrastructure, the extension of life expectancy, and the increase of the degree of scholarization and of waste disposal (an illustrative calculation is sketched below),
• to measure the degree of residents' satisfaction with the city, standard public opinion surveys can be used,
• to measure the growth of socio-political activity we can use turnout in elections, especially local elections, participation in religious life, the formation of new businesses and social organizations, etc.
The above-mentioned suggestions do not cover the whole range of instruments that are used today in the social sciences and can be applied in the study of the social capital management of a great city. It is also necessary to continue finding effective solutions showing the nature and importance of how modern metropolises work.
References
[1] Bourdieu P., Wacquant C.D.J. (2000). Zaproszenie do socjologii refleksyjnej, Warszawa.
[2] Coleman J.C. (1988). Social capital in the creation of human capital, "American Journal of Sociology", No 94.
[3] Fukuyama F. (1997). Zaufanie. Kapitał społeczny a droga do dobrobytu, Warszawa-Wrocław.
[4] Grootaert Ch. and van Bastelaer T. (ed.) (2002). Understanding and measuring social capital. A multidisciplinary tool for practitioners, The World Bank, Washington D.C.
[5] Jałowiecki B. (2002). Zarządzanie rozwojem aglomeracji miejskich, WSFiZ, Białystok.
[6] Januszek H. (ed.) (2005). Kapitał społeczny we wspólnotach, Wyd. AE, Poznań.
[7] Putnam R.D. (2000). Bowling alone: the collapse and revival of American community, New York.
[8] Sirojć Z. (2012). Społeczne problemy rozwoju metropolii / Socialne problemy rozvoja metropol, Prešov.
[9] Szlachta B. (ed.) (2004). Słownik społeczny, Wyd. WAM, Kraków.
Summary
Key words: great city, metropolization, social capital, the structure of social capital, social capital
management in the great contemporary city
The aim of this article is to present some aspects of great city management. The author makes the point that in social capital management in a great city we should take into consideration the following:
• construction of a development strategy,
• identification of resources,
• evaluation of resources,
• creation of conditions for development,
• monitoring of the management process and the development of resources,
• estimation of results.
Zarządzanie kapitałem społecznym we współczesnym mieście
Streszczenie
Słowa kluczowe: wielkie miasto, metropolizacja, kapitał społeczny, struktura kapitału
społecznego, zarządzanie kapitałem społecznym współczesnego wielkiego miasta
Celem artykułu jest prezentacja niektórych aspektów zarządzania kapitałem społecznym w wielkim mieście. Autor wyróżnia następujące etapy postępowania i proponuje ich stosowanie:
• konstruowanie strategii rozwoju,
• identyfikacja zasobów,
• ewaluacja zasobów,
• tworzenie warunków rozwoju,
• monitorowanie procesu zarządzania rozwojem zasobów,
• ocena rezultatów.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Tatyana Krotova
Kyiv University named after Borys Grinchenko, Ukraine
Evolution of model.
The origins of simulation in design
Introduction
The reduced model, a copy of a planned object, has long been used to test aesthetic and structural solutions and to search for the appearance and perfect design of structures and objects. Reproduced on a smaller scale while keeping the actual proportions, the model conveys the image and character of its subject. One of the key functions of the model was thus the demonstration of a future creation. Considering modeling as a kind of objective creativity, another important function should be mentioned: the construction of models of real objects for the detailed study of the tectonics, structural features and proportional ratios of the object. Thus, the process of model construction was originally not an end in itself, but arose from the need to solve a number of applied problems. Made in clay, wax, wood, bone, stone or metal, a model allowed craftsmen to reproduce volume and plasticity and to identify the spatial properties of the depicted objects and figures by means of light and shade gradations.
The study of simulation is of scientific interest and completes our understanding of the evolution of morphogenesis, technological development, and the principles of artistic design, composition and aesthetics in terms of their influence on the development of design. In our time, the need to study the cultural and historical background of modeling, the roots of the modern concept of modeling and of project teaching methods, has become clear. The purpose of this article is to analyze the model as a formal and semantic sign corresponding to the image of a future object, transmitting its aesthetic, structural, technological, proportional, artistic and meaningful characteristics. The tasks of the article are: to examine the premises of the origin of the model as a phenomenon in ancient cultural and social activities; to characterize the functional role of ancient models; and to identify those routes in ancient modeling which played an important role in the development of modern project modeling.
Analysis of the origins of ancient models having a figurative and conceptual link with their prototypes
Due to the structural and semiotic relationship between the model and the conceived, shaped object, modeling became a universal tool of study in art and the applied fields as well as in science. Simulation in the broad sense is the creative idea fixed in a tangible form. A model (French modèle, ultimately from Latin modulus, 'measure') is a sample of a product, a simplified description of a real object, a model instance, a scheme, a mockup of something (Belova, 2010). Modeling has a long tradition. This is confirmed by numerous findings of ancient models in Egypt, Mesopotamia and Greece, as well as on the territory of Ukraine.
Simulation in ancient Egypt
In ancient Egypt, models of ships and various buildings were produced. According to the beliefs of the Egyptians, before entering the realm of Osiris, the realm of the dead, a man's soul had to cross a river. In connection with this, models of ships with an altar for the dead and figures of servants at the sides were found in the tomb of Tutankhamun (Treasures of Tutankhamun's tomb) (Figure 1). These amazing, well-preserved models were created by skilled craftsmen: made of wood, with the relevant ship's rigging, and painted in colors corresponding to their real-world counterparts, they can be used to judge the real ships of Ancient Egypt of that time.
At the end of the 3rd millennium B.C., wooden models of breweries, bakeries, butcheries and granaries, with painted figurines of brewers and various servants, appeared in the tombs instead of bas-reliefs (Treasures of Tutankhamun's tomb). In one of the tombs a tiny brewery of this kind was found (Figure 2). Thus, the dead man was equipped with everything necessary for the afterlife. The artistic level is characterized by originality and spontaneity; the rendering of details is simplified, emphasizing a symbolic, conventional interpretation. The craftsmanship indicates the existence of developed art and craft centers. Although they are not exact copies of the corresponding structures, objects and vessels, the models of the Ancient World still convey their overall design and a variety of elements, which allows us to get a broad view of the lifestyle and object forms of that time.
A unique exhibit among those found in Tutankhamun's tomb is his wooden statue, 76.5 cm high (Figure 3). The fact that the height of the Pharaoh's statue corresponds to the height of his torso in full size (but without arms and legs) gave scientists grounds to think that it was used for fitting garments. The head is decorated with a headdress with a golden cobra (uraeus), and the face is painted in a dark red color (the color used by sculptors for men's skin). Thus, the ancient Egyptians put a mannequin into use, thereby laying the foundations of professional clothes modeling. Obviously, a person endowed with absolute power could not take part in the daily, time-consuming trying-on of countless diverse garments. In addition, craftsmen were not allowed to touch the sacred body of the ruler and had to provide ready-made clothes. These tasks of exactly matching clothing to body proportions and fitting the figure were brilliantly solved by the wooden torso, a replica of the pharaoh's body.
Figure 1. The model of Tutankhamun's boat, Egyptian National Museum,
Cairo
Figure 2. The model of brewery, Egyptian National Museum, Cairo
Figure 3. Tutankhamun's wooden statue, priming and coloring, Egyptian National Museum, Cairo
Subject-shaped modeling in the Tripolye culture
Ceramic models of temples and houses, of their internal contents and of housewares have been found in excavations of the Tripolye culture on the territory of present-day Ukraine. The Tripolye culture is an archaeological culture that extended in the 5th-3rd millennium B.C. along the Danube and Dnieper rivers, near Kiev. It was based on Neolithic tribes who were the creators of highly developed agriculture and social relations. Sedentary life was favorable to the flourishing of pottery, the samples of which exhibit a high level of subject-shaped modeling. Although they are not engineering-construction models, they often reveal more subtle processes associated with the cult of ancestors, aimed at achieving common prosperity in the house and farm.
In excavations of the Tripolye culture, about two dozen such models and more than fifty fragments, depicting in detail the appearance of homes as well as interiors, have been found. Thirty-four thousand exhibits are presented at the Kiev regional archaeological museum, which was created for the 100th anniversary of the discovery of the Trypolye civilization by the Ukrainian archaeologist V. Hvojko (Tripolskaya archeological culture of the Kiev region). A large group of Ukrainian and Russian scientists was engaged in large-scale excavations; they analyzed and reconstructed different material forms of this culture. Studies by K. Zinkovskyi indicate that the dwelling models correspond to clay beddings with remnants of house slabs developed vertically. Each layer of clay with imprints of wooden structures corresponds to a built slab (Burdo, 2013). Thus, depending on the availability and discovery of such layers, reconstructions of homes with interfloor and attic floors in the form of wooden flooring covered with a thick layer of clay were made (Figures 4, 5). Thanks to the remains of dwelling models, the presence of several rooms and certain groups of interior details on different floors was discovered. The complex horizontal and vertical planning of the buildings and the presence of variegated implements show the different functions of individual rooms. Finds of models of chairs, armchairs and tables with simple but ergonomic shapes suggest that at the turn of the 6th-5th millennium BC the Tripolyan people were already able not only to use the technologies of ceramic production and wood processing, but also paid attention to the problems of functionality, ergonomics and the plastics of object form.
Analysis of the clay sculpture indicates the skill of Tripolian artists in combining the real with the mythological. This plastic art is rich and diverse. Nude female figures predominate; occasionally men's figures can be found, and there are images of cattle and models of household goods (Tyunyaev, 2009). Plastic elements often complement the pottery, so it is impossible to separate the plastic art from the painting. The nature of this creativity reflected the specific features of the predominant type of farming, agriculture, and the very essence of a worldview that attached determining value to grain, land, rain and harvest.
All the sacred meaning of these concepts is depicted in the painting of the clay statuettes of female figures (Figure 6), which date from the earliest images, of the 4th millennium BC. On the bellies of these figures we can see the sign of the 'sown field', which belongs to the most ancient Slavic symbols of the abstract type. It has the sacred significance of the origin and development of life. The depicted rhombus is a field, the symbol of manifested life, of matter. The images of grains within the field are a symbol of the resource of vitality. Thanks to archaeological evidence, we can trace another side of the symbolism of the 'sown field' sign: the principle of the four sides of the world. Earth, soil and the plowed field were likened to woman, the heavenly prototype of the foremother; the sown field, a land with grain, to the woman 'who has become pregnant in her womb'. Woman and land are compared and equalized on the basis of the idea of fruitfulness and fertility.
Figure 4, 5. Tripolian ceramic models of dwellings, Kiev regional archaeological museum
Figure 6. Tripolian statuettes with the signs of the sown field
Simulation of the Scythians and the Sarmatians
We can judge the modeling of the Scythians and Sarmatians, nomadic tribes living in the steppe territories between the Danube and the Don, as well as on the Crimean peninsula in Ukraine and in the northern Black Sea area from 1500 BC to the 7th century BC, by the clay models of four-wheel covered wagons (History of Technology; History of Sarmatians; Chronology). These models (Figures 7, 8) were found in the ancient town of Panticapaeum, settled in the first half of the 6th century BC (modern Kerch, Russia). Historians speak of seasonal migrations of the Sarmatians in such caravans, which served them as housing. Spoked wheels came later; previously the wheels were of solid wood. The nomad tent was woven of twigs, and the top frame served to support a felt tent. Sometimes tents were made of skins or bark. Four- or six-wheel carts were pulled by teams of two or three bulls.
In another aspect, a model or copy of an actual prototype acted in ancient times as a simulator. Already in the ancient period, figures of soldiers and horse riders and models of weapons, ships and fortifications were used for the situational modeling of tactical maneuvers and military strategy. Such experiments with a model are a simulation of the real behavior of the object. During such situational modeling, experience and a fair view of the object or phenomenon can be gained, as in real practice.
Children's toys, too, modeled the studied reality in play. In children's burials in Panticapaeum, different toys in the form of various animals can be found. Most often they were made of baked clay. Below (Figure 9) a table with images of such toys is shown (Tsvetaeva). In the first centuries AD, terracotta toys with wheels appeared: wagons, bulls and dogs.
Figure 7. Model of a Scythian carriage, 4th-3rd century BC, Historical-Archeological Museum, Kerch, Crimea
Figure 8. Scythian carriage (clay model), 4th-3rd century BC, Historical-Archeological Museum, Kerch, Crimea
Figure 9. Children's toys: lower line, 6th-4th c. BC; middle line, 1st-2nd c. AD; upper line, 2nd-3rd c. AD; Panticapaeum, now Kerch, Crimea
Methods of simulation application in the Byzantine Empire
and the Kievan Rus
The sign-symbolic function is visible in the models of temples in pictures of the interior decoration of the sacred architecture of the Byzantine period and then, following Byzantine traditions, of the art of Kievan Rus. One such image, a mosaic canvas (Figure 10), decorates one of the main entrances to the main cathedral of the Byzantine Empire, Saint Sophia in Constantinople, now Istanbul, Turkey (mosaics of St. Sophia Cathedral in Constantinople). In the hands of Constantine the Great, the founder of the empire, shown at the right hand of the Mother of God, is a model of the city of Constantinople, which was built by his order in the middle of the 4th century AD. The version of the temple surviving to the present day was built by Emperor Justinian (to the left of the Mother of God) in the middle of the 6th century; this is the model we can see in his hands. The mosaic dates back to about the year 950 and was made during the reign of one of the subsequent emperors, Constantine Porphyrogenitus. The creation of the mosaic was aimed at demonstrating the unity of the imperial power and the church. Constantine and Justinian bring their gifts to the Main Lady of the temple. The simplified composition, severe and symmetrical, displays an abstract idea: carrying out a mission on earth equal to that of the apostles, the emperors appear before the Virgin Mary, the perfect protection of the city and the temple which they created, praying to the Lady Queen of Heaven and Constantinople for protection.
The traditions of the Byzantine Christian world, which found their historical continuation in the times of Kievan Rus, were embodied in models of ancient buildings during the reign of Yaroslav the Wise, who in the middle of the 11th century turned Kievan Rus into a powerful state. Prince Yaroslav planned to carry out architectural projects in Kiev like those masterpieces which had made Constantinople famous. Among them was the Hagia Sophia cathedral: Sophia of Constantinople, the main shrine of the Orthodox world, served as a model for it. So, from Byzantium to Rus, together with Christianity, the image of Sophia, the Wisdom of God, uniting beauty and wisdom, was transferred, an image embodied in majestic cathedrals, icons and frescoes. Chroniclers called Yaroslav the disciple of St. Sophia, and therefore he is called the Wise in history.
The outstanding Russian artist Nicholas Roerich, who visited the Saint Sophia Cathedral, created a series of paintings, including 'Yaroslav Rejoiced at the View of Kiev City' (Roerich, 1938). It shows (Figure 11) the moment when architects and artists submitted to Prince Yaroslav for approval the sketches of Kiev's future masterpieces. In the hands of the first master is the sketch of the thirteen-domed Hagia Sophia Cathedral; the second holds the future architectural model of Kiev's shrines. In the hands of the third master, an iconographer, we can see a sketch of temple painting. A fourth master can be seen too, who also brought a sketch, invisible to us, for the Prince's approval. Before us are the design modeling methods of those times: each of the masters, using the means most expressive for him, whether flat or three-dimensional, shows the images of the conceived structures.
The Greek architects whom Yaroslav invited from Constantinople did not copy the Byzantine cathedral. In contrast to the Hagia Sophia, covered with one giant dome, the Kiev cathedral had 13 domes, symbolizing Christ and the 12 apostles. On an unpreserved front fresco of St. Sophia Cathedral, Yaroslav the Wise was depicted 'presenting to the Savior the temple he built, and not just the temple, but the temple as symbol, the temple as image of the Christian city of Kiev, decorated and glorified by him' (Vasylkova, 2007). This image of Yaroslav was taken as the basis for the monument set in a park near the Golden Gate in 1997, for the Day of Kyiv (Figure 12). According to the sketch by the sculptor Kavaleridze, Prince Yaroslav the Wise is depicted sitting; the figure is made of bronze, and his gaze is directed toward the St. Sophia Cathedral, which he holds in his palms (Monument to Yaroslav the Wise in Kiev).
In these examples, the objects of modeling, the models of temples, play for fine art the role of full-scale visualization, while in terms of content these works of art (mosaic, painting, sculpture) express the idea of the exaltation of faith through the ruler of the state.
Figure 10.
Constantine the Great and Justinian in front of the Mother of God on the throne, about the year 950, mosaic canvas in the cathedral of St. Sophia in Constantinople (now Istanbul, Turkey)
Figure 11.
'Yaroslav Rejoiced at the View of Kiev City' (Roerich, 1938)
Figure 12.
Monument to the Prince Yaroslav the Wise
(according to the sketch of sculptor I.
Kavaleridze)
Model as a source of study of design principles and the objective world
of the past
Carefully kept models of historic buildings which, for various reasons, have not survived to our days are sources for the modern study of design and of the material world of a given period in history. Exhibition models developed actively in the 18th–19th centuries; by that time this type of modeling had gained independent aesthetic value and immediately became collectible. Architectural models have been preserved in museums and private collections in Europe, including a unique model of the medieval St. Paul's Cathedral in the London Museum (Figure 13).
The cathedral was founded in 1087 and its reconstruction was completed by 1280. In the form we see on the model, it existed until the Great Fire of London in 1666. Before its destruction, the medieval St Paul's Cathedral was the largest in England and the third largest in Europe: its length was about 180 m, its width 30 m, and the height of its spire almost 150 m. After the fire it was decided to build a new cathedral in its place. This model of the medieval St Paul's Cathedral is 100 years old. It was made by the architectural model-maker J. B. Thorp and exhibited at the White City Exhibition in 1908; the London Museum acquired it in 1912. The inscription on the plate in the museum exhibit states that, due to its age, the model is very fragile.
We have a very good idea of how it looked just before its destruction, for Wenceslaus Hollar engraved a very full and detailed set of its views in the 1650s (Old St Paul's Cathedral). Although the engraving (Figure 14) has its own artistic value, it is thanks to the three-dimensional model that we can clearly extend our concept of the principles of English Gothic of the 11th–13th centuries.
Figure 13.
The model of the medieval St. Paul's Cathedral in London, J. B. Thorp, 1908. Photo by T. Krotova, 2013
Figure 14.
Wenceslaus Hollar, engraved views of the cathedral, 1650s
The systemic nature of professional modeling
Modeling reached a high level of perfection, as shown by the fragile artefacts collected in museums around the world. Since the beginning of the 16th century, models executed in different wood species have demonstrated excellent carving of capitals and ornaments, showing a growing striving for perfection. From a simple tool the model gradually turned into an object of attraction, a masterpiece of craftsmanship. Models were made not only of wood but also of clay, wax, and stone.
Development of architectural modeling in the Renaissance
Active use of architectural modeling became common in Italy in the 15th century. The great architect and theorist Leon Battista Alberti – following the example of Caesar, who ordered the destruction of a house built on his order because it did not satisfy him in its finished form – wrote in 1485: 'I'll never be tired of recommending what should become a habit for good architects: to think and rethink the project of construction in all its complexity, and at the same time every part of it, using not only drawings and sketches, but also models made of planks or other materials' (Alberti, 1935). Alberti condemned the use of 'colored or painted models for appeal' as a sign that an architect is looking for a way to impress with appearance, thereby distracting scrutiny from the project itself. Thus it is preferable 'not to make a model impeccably trimmed, polished and shiny, but plain and clean, in order to highlight the subtlety of ideas, rather than the subtlety of execution'. The model recommended by Alberti was at the center of architectural creativity. Through the model the architect tested the project in material and volume, and conveyed his idea to the customer before it was worked out in plans and drawings, as we can see in the painting by D. Cresti, 'Michelangelo presents the model of St. Peter's dome to Pope Paul IV' (Figure 15), (Orlov, 2013).
The Renaissance left many examples of magnificent completed models, of which we present one. After the death of Antonio da Sangallo, Michelangelo took over his position as architect of St. Peter's cathedral and began to build his own model. It is known that Michelangelo often manufactured several models for each of his projects. In 1557 this allowed Michelangelo to avoid a serious mistake that had been found in the structure of the vault in the south aisle of St. Peter's cathedral. This mistake could have caused the destruction of the arch. Thanks to the model, the architect was able to prove that it was a builder's mistake.
One of Michelangelo's models of St. Peter's Cathedral – the model of the tholobate and dome (Figure 16) – is kept in the Vatican Museum. In 1564, when the architect died and most of the Cathedral already towered above the ground, the tholobate was built. Construction of the tholobate was directed by Giacomo della Porta, who made some changes to the model of his distinguished predecessor (Orlov, 2013).
Figure 15.
Michelangelo presents the model of St. Peter's cathedral dome to Pope Paul IV. Domenico Cresti, Casa Buonarroti Museum, Florence
Figure 16.
Models of the tholobate and dome of St. Peter's Cathedral in Rome. Michelangelo, 1558-1561. Wood, tempera, 5000 × 4000 × 2000 mm, Vatican Museum
The A. Gaudi model as the basis for calculating static design forces
In the late nineteenth century the Spanish architect Antonio Gaudi took a special place in the development of architectural modeling. In 1898 Eusebio Guell, the main customer of the great architect, commissioned a design for the crypt of Colonia Guell. A whole research program was completed for the projected church, and a 4.5 m high arched model was built, suspended from the ceiling of a temporary workshop (Project Development. Crypt model). The surviving pictures meticulously record the complex suspended structure of loads, ropes and chains at different stages: from an intricate web design to a painted canvas (Figure 17).
The origin of this model harked back to Gaudi's belief that the arch has unique strength and beauty. In all his previous buildings the arch served simultaneously as a decorative and a structural element, but now Gaudi developed the idea further and made the arch the basis of the whole project. Drilling a set of holes in the ceiling and placing them strictly in a circle, he obtained attachment points for arches that, sagging under gravity, formed a graceful garland. When the next layer was arranged on top, the original arches stretched further. The challenge lay in fastening the entire structure by gently tightening the ropes. As the layers of arches approached the floor, the outer circle was tightened toward the bottom too. As a result, a model so unstable during its construction turned into a construction of fantastic strength.
Loads were attached by a complex system to the tops of the inverted arches to compensate for the increased load, as well as to outline all the features of the plan, which differed from a proper circle and complicated things even more by adding various internal arches and vaults. A. Gaudi and F. Berenguer worked on the model at every visit to the town of Santa Coloma. Other members of the team spent thousands of hours adding pellets, one by one, into tiny canvas bags. Colonia Guell turned into an architectural laboratory operating by the method of trial and error. Located in an improvised temporary structure among a pine forest, it became an advanced architectural studio.
After completion of the model, photographers captured it on film and the printed image was flipped. Thereafter A. Gaudi, F. Berenguer and the draftsmen used these photos, with their clearly visible structure, for the planned construction of the design and external appearance of the church.
This amazing model was not a model of the finished structure but the basis of the design calculations. As a result of this experiment it gave Gaudi, as a civil engineer, the opportunity to fully develop two major components of the building's construction: the parabolic arch and the sloping support. 'In the designed model he could estimate the corresponding pressure which the arches and columns would have to experience: a careful calculation of the weights of small lead-shot bags on a system of laces in the ratio 1:10 000 allowed him to calculate how much weight the arches and columns would carry' (Zerbst, 2009).
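The principle behind this hanging model – a chain loaded only by weights takes a shape that, once inverted, gives an arch working in pure compression – can be illustrated numerically. The following is a minimal sketch of that idea, not Gaudi's own procedure: the span, chain length and load value are assumed purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# A uniform hanging chain takes the catenary shape y(x) = a*cosh(x/a).
# Inverted, the same curve is an arch carrying its own weight in pure
# compression -- the principle of Gaudi's funicular models.

span = 6.0     # horizontal distance between supports, m (assumed)
length = 8.0   # length of the chain, m (assumed, must exceed the span)

# Solve 2*a*sinh(span/(2*a)) = length for the catenary parameter a.
a = brentq(lambda a: 2.0 * a * np.sinh(span / (2.0 * a)) - length, 0.5, 100.0)

sag = a * (np.cosh(span / (2.0 * a)) - 1.0)   # depth of the chain = rise of the arch
print(f"catenary parameter a = {a:.3f} m, sag (arch rise) = {sag:.3f} m")

# Forces scale linearly with the applied weights, so a model loaded at
# 1:10 000 of the real weights reproduces the real force pattern at
# 1:10 000 of its magnitude.
model_load_kg = 0.0025   # a small shot bag on the model (assumed value)
print(f"{model_load_kg*1000:.1f} g on the model ~ {model_load_kg*10_000:.0f} kg at full scale")
```

Because the hanging form is found by equilibrium rather than by drawing, every change of a weight immediately reshapes the curve – which is exactly how Gaudi's team iterated on the crypt design.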
Gaudi did not prepare the project on the drawing board but studied the static forces in practice, using the model with its accurately calculated correlation of forces. The finished building demonstrates the results obtained through the use of models. A visitor first notices the columns and the spatial complexity of the internal volume. The hall consists of giant parabolic arches and sloping walls and columns, which are sufficient to support the weight of the vaulted ceilings. Throughout the construction there are no two identical elements: no column is like any other one, just as in nature there are no two identical tree trunks.
He worked on this small crypt for ten years. This period should be seen as a preparatory step towards the large-scale project of the Sagrada Familia Cathedral. The models made by Gaudi for this construction were so informative that they allowed construction to continue even many decades after the death of the architect (Figure 18). In the museum inside the cathedral, drawings and various Gaudi models are exhibited (Sagrada-Familia). Thanks to the models on which he conducted research on static loads, it became possible to build such masterpieces of world architecture as the Sagrada Familia Cathedral in Barcelona.
Figure 17.
Photo of the model of laces and sinkers, which served A. Gaudi to calculate the force vectors acting on the structure of the building in the Colonia Guell crypt
Figure 18.
Models made by Gaudi, the Sagrada Familia Cathedral, Barcelona, Spain
Today, as thousands of years ago, despite the great achievements of 3D graphics, models and mock-ups are key elements in the search for and verification of design decisions and in project presentations. In modern usage this effective method, proven in many years of practice, is often called not just prototyping but the method of design modeling.
Demand for the manufacturing of quality models in the sphere of architecture and design has led to the creation of professional workshops that specialize exclusively in the production of models. Such is the Moscow model workshop 'Studiya model' (makety.ru). The workshop's experts – architects, sculptors, engineers – not only provide the most accurate and careful study of the details of models at an intermediate stage of design but also bring works to the level of independent masterpieces. Their models are characterized by creativity, high quality, innovative solutions, and the use of modern materials and technologies (Figures 19, 20), (Conceptual layout, 2013). Modern, high-quality equipment is used for manufacturing the models: 3D milling-and-engraving machines 'Rolland' and 'Woodpecker', 'Laser Line' laser engraving systems, a plasma material-processing station, as well as a variety of materials – plastic, wood, plaster, various metals.
Figure 19.
Model 1:150. Materials: steel, aluminum, tin, brass, acrylic. Dimensions: 700 × 500 mm. Idea: the office building of the workshop 'Studiya model' set into a not yet formed island of greenery in the urban environment
Figure 20.
The interior design of the office building 'Studiya model'
Modeling is an indispensable method of finding artistic and constructive solutions in the field of fashion design. As noted above, the history of clothes modeling originates in ancient Egypt. Wooden torsos as modeling tools were also used at the end of the 19th and the beginning of the 20th century. Thus, the French fashion designer Madeleine Vionnet (1876-1975), without whose creative heritage it is difficult to imagine women's fashion of the 1930s, 'molded' her dresses on a small dummy half of human height, pinning the cloth hundreds of times, trying to achieve a perfectly customized fit with a single figure seam (Figure 21).
Madeleine did not use sketch graphics but had exceptional spatial reasoning. Her most famous invention is the cut on the bias (at an angle of 45 degrees relative to the warp), which she used from the second half of the 1920s for the whole item, and not only for individual small parts as before. Madeleine Vionnet was indifferent to color but had a passion for form, which she understood as devotion to the natural lines of the female body (Madeleine Vionnet, 2009).
The greatest fashion designer of the 20th century, Christian Dior, is one of the few who personally described the whole process of creating collections, in the autobiographical book 'Dior about Dior'. In this process he singled out, as one of the most important steps, the search for the form of the future item on a mannequin using linen fabric. Dozens of masters of the Dior house, having received sketches from the chief master, draped wooden mannequins with linen in multiple searches before understanding the possibilities of the material and beginning to form the silhouette. Work with a dummy succeeded only when the style of the collection had been carefully designed. The demonstration of the created linen forms was called 'a solemn day', 'ceremonial'.
C. Dior expressed the importance of this day in the following way: 'Our form made of linen, like the drawings that created it, does not yet resemble dresses; it has almost no details and indicates, in the most general terms, the silhouette, the main lines, the cut. Before us is the basis for making the template, the fit of which will form the foundation of the entire collection. Hints at details like cuffs, bows, pockets, belts will appear later; only if they are an important element of the design are they made immediately. This day is crucial for the collection. It allows me to select, among the silhouettes that I imagine, five or six base ones on which dresses, suits and coats will be sewn' (Dior, 2011). In the process of work with the linen models the viability of each form becomes clear: sometimes sketches that looked promising at first glance turned out inexpressive when realized; sometimes forms became very different from what was conceived in the sketch, and possible ways of approaching what was intended had to be sought. After final decisions had been made, the forms were entered in a special list.
Evidence of the great importance attached to the long, painstaking search for the desired form and silhouette through design modeling became part of the exhibition 'Dior: under the sign of art', held at the Moscow State Museum of Fine Arts named after A. S. Pushkin in 2011 (Dior: under the sign of art, 2011). The section called 'Atelier' opened with huge white shelves under a canopy of white transparent fabric (Figure 22). On the shelves were placed form-models on mannequins – prototypes of dresses and costumes considered by the designer before the selection of fabrics. In Figure 23 we see a linen mock-up model of a dress from the 'haute couture' collection Spring/Summer 2007, created for the Dior house by John Galliano. 'A couturier wants not only to find a new way of cutting or good sewing; above all he wants to express himself, and, probably, modeling, despite its ephemeral nature, is the same kind of language as architecture or art', thought C. Dior.
In clothes design the model can serve not only as a tool for searching for the desired form, with subsequent transfer of the found variants into the basic fabric, but also the function of adjusting and fixing forms depending on the type of figure, acting as an appropriate blank. Such an example is the mock-up model of the no-fitting method of tailoring by the famous Ukrainian designer Mikhail Voronin (1938-2012), patented in 1970. In the photo (Figure 24) M. Voronin demonstrates this method. The model is a construction without collar and sleeves, made on the templates of the jacket with the manufacturing technology of the future item in mind. It is used during pre-fitting, combined with the taking of measurements, as a tool for measuring customers' figures of different physiques. The construction of the vest is divided into basic design areas horizontally and vertically, with the possibility of fitting it to the figure. The divided parts are connected with measuring tapes, with textile fasteners or magnets at the ends, allowing the positions of the parts of the measuring vest to be fixed. The amount of deflection of the measured figure from the conditional proportional one is defined by centimeter points marked on the measuring tapes located on the lines of horizontal division, as well as at the base of the back gorge, at the waistline and at the junction of the front parts.
In Figure 25 we see the design of the vest-model of the jacket for conditional types of figures (Voronin, 1985). Based on many years of experience and on the analysis of a large sample of customers' measurements, Voronin used fifteen typical sizes of such models, grouped by height and girth respectively. The vest-mockup method permits the recording of the customer's figure deviations: body position, chest width, bulge of the shoulder blades, slant of the shoulders, difference in gorge tops, convexity (concavity) of chest and back, latitudinal areas at the hips and the balance of the item as a whole. Fixed by the measuring vest, these deviations are adjusted with accuracy on the ready templates, and then during the cutting and manufacturing of the item. Thus Voronin managed to standardize the fitting of the suit with no loss of the quality of custom-made clothing. In addition, work with the customer requires only one meeting, which greatly simplifies and accelerates the process.
Figure 21.
M. Vionnet at work. Second half of the 1930s
Figure 22.
Shelves holding the linen models of future items of C. Dior on mannequins. Photos from the exhibition 'Dior: under the sign of art', section 'Atelier'. Moscow, State Museum of Fine Arts named after A. S. Pushkin, 2011
Figure 23.
Collection 'haute couture' Spring/Summer 2007 (linen mock-up model). Heritage of the House of Dior, Paris. Photos from the exhibition 'Dior: under the sign of art', section 'Atelier'. Moscow, State Museum of Fine Arts named after A. S. Pushkin, 2011
Figure 24.
M. Voronin demonstrates the no-fitting method of tailoring with the vest-model (vest-mockup model, 2011)
Figure 25.
Design of the vest-model of the jacket for conditional figure types (Voronin, 1985)
Conclusions
The modern understanding of modeling is not a new form of object-creating activity such as computer modeling. Even in ancient times people were well aware that three-dimensional models made of wood, wax, bone and metal are more visually intuitive than a planar drawing: they reproduce a much more expressive image of the object, its constructive and proportional features. Experimenting with them, one can better understand the object itself, concentrating on complex elements. In the ancient period, models of architectural structures, ships and machinery had a sketch and design character, alongside the early technical drawings and blueprints. Besides sketching and design, another function developed in ancient times – the simulator function. Situational or simulation modeling allowed processes identical to real ones 'to be played' on the models made. Thus, with the help of models, the behavior of those objects for which real experiments were expensive was simulated. The historical backgrounds of this art form are, then, subject-shaped object modeling for religious, military, artistic, engineering and construction tasks.
Ancient subject-shaped simulation contains an aesthetic and technological base; it typically shows a high artistic, plastic, compositional and shaping level, which is essential from the standpoint of modern design. Analyzing the evolution of the model in subject art, we can identify the main directions that played an important role in the development of modern project simulation:
 modeling as the creation of prototypes in order to find, test and improve the artistic and design qualities of the object;
 modeling as a way to demonstrate the plan;
 modeling as an artistic craft in the creation of objects of religious purpose;
 the model as full-scale visualization in the visual arts, carrying the sign-symbolic function;
 the model as an expression of the object at the sign or code level.
In general, for the ancient period we can speak of a simplified transfer of the real forms of the object by models; in spite of this, it should be noted that a stable relationship of philosophical or functional identification is always present. Ancient models are vivid sources of information, transmitting not only the properties and principles of the object but also the historical lifestyle, traditions, way of life and subject forms of one or another period.
During the Middle Ages and the Renaissance, simulation also allowed artistic and engineering problems to be solved by constructing and testing models, thereby replacing technical or drawing documentation. In the 19th–20th centuries modeling, equally with graphic design and technical documentation, was included in the conventional order of development of architectural objects, environmental design and clothing.
The model in modern design is one of the means of visualizing ideas, an expression of the creative thought of the designer, a method of transmitting information about the proposed object. In designing, volumetric and graphic models complement each other and develop the author's intent. Modeling of the design object at all stages of its development (in scale and full size) occupies an important place in the design process. Here the model serves as a design tool, allowing one to check and select the best formative, compositional, color, ergonomic, structural and other decisions. In the process of three-dimensional modeling it is easier for a designer to get a holistic view of the shape and structure than in planar modeling (drawings, design graphics). Improving the model in the process of work brings it, at the final stage, to the level of a reference sample of the product.
Being a reflection of the future object or a reproduction of an already existing one, modeling has acquired a multi-valued interpretation in science, technology and art, and the simulation process has been classified by the type and character of models and simulated objects, as well as by the areas of use of simulation.
Let us mark out the main directions of modeling in modern design:
 the model as an experimental sample at an intermediate stage of the design of individual objects and before the start of serial manufacturing;
 the model as a way of presenting a design solution;
 the model as a part of situational (scenario) modeling;
 the model as an independent production;
 the model as a historical object of study.
As in ancient times, object-art modeling is an effective tool and method that actively stimulates the visualization of creative ideas.
References
[1] Alberti, L.-B. (1935). Ten Books on Architecture. T. I. – M.: Publishing All-Union Academy of Architecture. – 427 p.
[2] Belova, I.L. (2010). Project Simulation Paper: Study guide. – Nizhny Novgorod: VGIPU.
[3] Burdo, N.B. (2013). K.V. Zinkovskiy researcher Tripoliady / Stratum plus, 2, 15-23 [online]. Available at: http://www.eanthropology.com/Katalog/Arheologia/STM_DWL_EuOG_dC772bFB9qex.aspx [accessed 16 December 2013].
[4] Development of the project. Layout for the crypt [online]. Available at: http://gaudi-barselona.ru/buildings/kripta/kripta_91.html [accessed 16 December 2013].
[5] Dior, C. (2011). Dior about Dior: Autobiography (translation E. Kozhevnikova). – M.: Slovo.
[6] Dior: Under the sign of art (2011) [online]. Available at: http://ziggyibruni.livejournal.com/66425.html# [accessed 21 December 2013].
[7] History of Technology / 'New Herodotus' General historical forum [online]. Available at: http://gerodot.ru/viewtopic.php?f=8&t=14388&start=45 [accessed 19 December 2013].
[8] History of Sarmatian, chronology [online]. Available at: http://ciwar.ru/varvary/sarmaty/istoriya-sarmatov-xronologiya/ [accessed 16 December 2013].
[9] Conceptual layout [online]. Available at: http://www.makety.ru/gallery/11/ [accessed 20 December 2013].
[10] Madeleine Vionnet – fashion purist [online]. Available at: http://wwwlookatme.ru/flow/posts/fashion-radar/70201-madeleine-vionnet-purist-modyi [accessed 22 December 2013].
[11] Monument to Yaroslav the Wise in Kiev [online]. Available at: http://travelloon.com/ru/ukraine/kiev/40 [accessed 16 December 2013].
[12] Mosaics of St. Sophia Cathedral in Constantinople [online]. Available at: http://library.elitceram.ru/articl/mozaika-sofia-sobor.html [accessed 16 December 2013].
[13] Old St. Paul`s Cathedral [online]. Available at: http://www.explore-stpaulsnet/oct03/textMM/OldStPaulN.htm [accessed 19 December 2013].
[14] Orlov, V. Kingdom of virtual architecture [online]. Available at: http://www.makety.com/article/ [accessed 17 December 2013].
[15] Roerich, N.K. (1938). Yaroslav Rejoiced at the View of Kiev City / Siberian Roerich Society [online]. Available at: http://sibro.ru/photo/poster/detail/2491 [accessed 19 December 2013].
[16] Sagrada-familia [online]. Available at: http://gaudi-barselona.ru/gallery/fotoSagrada-familia/foto-Sagrada-familia_182.html [accessed 20 December 2013].
[17] Treasures of Tutankhamun's tomb [online]. Available at: http://earth-chronicles.ru/news/2012-06-18-24984 [accessed 16 December 2013].
[18] Tripolskaya archaeological culture in Kyiv region (V-III century BC) [online]. Available at: [accessed 16 December 2013].
[19] Tsvetaeva, G.A. Children's toys [online]. Available at: http://www.snopro1.ru/lib/agsp/original/134.html [accessed 16 December 2013].
[20] Tyunyaev, A.A. (2009). Slavs – as maternal religious culture for all modern religions / History of World Civilization (system analysis) [online]. Available at: http://www.organizmica.org/archive/307/rp6-5.shtml [accessed 16 December 2013].
[21] Vasylkova, N. (2007). Nicholas Roerich and Ancient Russia [online]. Available at: http://subscribe.ru/archive/culture.arch.roerich/200712/07165755.html [accessed 18 December 2013].
[22] Voronin, M.L. (1985). Design and manufacture of men's outerwear by the no-fitting method. – Kiev: Publishing house 'Technology'.
[23] Within Ukrainian Fashion Week F/W 2011 Voronin presented a book about his life [online]. Available at: http://ivona.bigmir.net/beauty/news/308056-V-ramkah-UFW-Voronin-prezentoval-knigu-o-svoej-zhizni [accessed 25 December 2013].
[24] Zerbst, R. (2009). Gaudi A. Life dedicated to the architecture / transl. from English. Boris L. – Moscow: Publishing House of ART Rodnik.
Summary
Key words: model, mockup, simulation, design
Since ancient times, a miniature image of an architectural structure or of a military or economic mechanism was used to examine formative and constructive solutions and for technical development. Over time, the practice of making and using models in a utilitarian way was marked out into a special kind of project activity – technical layout or design modeling – which is in strong demand in architecture, construction, engineering, design and subject-related work.
Diverse religious, design, gaming and exhibition models are examined in the article from the standpoint of design in the process of its evolutionary development. The formal and substantive factors are identified which modern design simulation inherits from the ancient practice of making and using models. The modeling of design objects by plastic means today takes place in the creative search of every designer in various fields. The author concentrates her attention on identifying the specifics of volume-spatial modeling, typical for architectural and fashion design.
Ewolucja modelu. Początki symulacji w projektowaniu
Streszczenie
Słowa kluczowe: model, makieta, symulacja, projektowanie
Autorka koncentruje swoją uwagę na określeniu specyfiki miejsca – modelowaniu przestrzeni
typowej dla architektury i projektowania mody.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Zbigniew Wesołowski
Military University of Technology
Application of Computer Simulation for Examining
the Reliability of Real-Time Systems
Definitions of Fundamental Terms
1. A system is a separate part of the real world whose properties, and the phenomena occurring in it, are under investigation.
2. A structure of the system is a set of elements and of the interactions between them conditioned by their belonging to this system.
3. Behaviour of the system is the range of actions made by this system in conjunction with itself or its surroundings, which include the other systems as well as the physical environment. It is the response of the system to various stimuli or inputs, whether internal or external.
4. A real-time system is a computer system in which calculations are performed concurrently with an outside process in order to control, monitor, or react in a timely manner to events that occur in this process.
5. Theory of reliability is a field of research activity aimed at getting to know and understanding the crucial factors affecting the reliability of systems and of other structures existing in reality [1, 3, 10, 18].
6. Reliability of the system is interpreted as its ability to perform tasks on stated terms and in determined time intervals.
7. A reliability state of the system is the smallest set of linearly independent elements enabling an unambiguous evaluation of the ability of the system to perform tasks on stated terms and in the determined time interval. Reliability states are non-measurable.
8. A set of elements, together with a way of joining them which maps the influence of the inability of these elements on the inability of the system, is called the system reliability structure.
9. The smallest set of elements of the system for which individual reliability measures are well known, or for which gathering data enabling the estimation of these reliability measures is possible, is called an element of the system reliability structure.
10. The process of changes over time of the system reliability states is called the use process of the system.
11. Computer simulation is a method of drawing conclusions about the behaviour of systems based on data generated by computer programs simulating this behaviour [4, 5, 9, 15].
Let us introduce the following notation: N – the set of natural numbers; R – the set of real numbers; B = {0, 1} – the binary set; W(α, β) – the Weibull distribution with the parameters α (α > 0) and β (β > 0).
Introduction
Real-time systems are increasingly used in many, often critical, areas of human activity. Examples of such systems are control systems in power plants, emergency backup power systems in railways and telecommunications, and air traffic support systems. These systems are characterized by a complex functional and technical structure, and for systems of this kind the requirements concerning reliability are heightened. This results primarily from the potential consequences of their inability. For such systems it becomes an important issue to develop adequate methods of reliability analysis.
Methods of the reliability theory of systems can be broadly divided into two groups. The first comprises analytical methods, which allow the reliability of systems to be determined based on knowledge of the reliability measures of the elements of their reliability structures. These methods have limited application, because they require the assumption that the random variables used to describe the reliability of the elements have exponential distributions. Accepting distributions other than the exponential makes it extremely difficult to determine the reliability measures of systems. The second group consists of statistical methods, which allow the values of reliability measures to be estimated based on statistical samples. These methods require gathering statistical material of high volume. Since the acquisition of data about the reliability of real-time systems is difficult, and often impossible [16], one of the ways of examining the reliability of such systems is computer simulation.
Thanks to features like the possibility of quickly generating a wide variety of data and the flexibility of planning and performing a variety of experiments, computer simulation has in recent years become a peculiar modus operandi of many scientific specializations. It takes on particular importance in applications such as decision-making and war games, virtual reality, virtual nuclear testing, genetic engineering, astronomy and the study of systems reliability.
The purpose of this article is to discuss a method for the reliability analysis of real-time systems based on data generated by a computer program simulating the use process of the examined system.
Model of the System Reliability
A reliability model is an approximate description of the time evolution of the reliability states of the elements that are components of the reliability structure, mapping the influence of their inability on the inability of the system as a whole. This model defines the tuple
(G, D),   (1)
where: G is a model of the system reliability structure, D is a model of the system reliability dynamics.
Definition 1. A consistent directed graph
G = (V, R, A),   (2)
where:
 V = {s} ∪ E ∪ {t} is a set of vertices of the graph, in which E = {e_1, ..., e_mE} is a set of vertices called operational elements (or elements) of the system reliability structure;
 R ⊆ V × V is a set of edges of the graph;
 A = {A_s, A_{e_1}, ..., A_{e_mE}, A_t} is a set of sets of reliability states of all elements from the set V, where: A_s is a set of reliability states of the source s, A_t is a set of reliability states of the destination t, and A_{e_k} is a set of reliability states of the element e_k, for k = 1, ..., mE;
is called the model of the system reliability structure (or briefly: the structure, or the graph), if:
 it distinguishes two abstract vertices: the source s and the destination t;
 there is at least one path from s to t;
 each vertex from the set V is assigned a set of reliability states. □
Let K = {1, ..., mE} be a set of identifiers of the elements from the set E. Let Π = {π_1, ..., π_mΠ} be a set of acyclic paths of the graph G (2) leading from the source s to the destination t. Because G is a consistent graph, for each element e ∈ E there is a path π ∈ Π such that e ∈ π.
Assumption 1. The source s and the destination t are reliable. □
Assumption 2. All elements from the set E are repairable with non-zero repair time. □
Assumption 3. The system and all elements from the set E are bi-state in terms of reliability. □
Justification. The functioning of real-time systems is usually assessed on the basis of a binary grading scale [8, 16, 17]. This is due to the fact that these systems are designed to perform all of their tasks continuously.
Let A_s = {1}, A_t = {1}, A_{e_k} = {0, 1} for k ∈ K, and B = {0, 1} be the sets of reliability states, respectively, of the source s, of the destination t, of the elements e_k, k ∈ K, and of the system, where: digit 1 means the state of ability, and digit 0 means the state of inability of the appropriate element from the set V or of the entire system.
Since the sets A_{e_k} and B are binary, arithmetic operations on the elements of these sets can be performed in the Boolean algebra determined by the algebraic structure ({0, 1}, ∨, ∧, ¬).
Assumption 4. Failure of any element e ∈ E does not affect the functioning of the other elements from the set E. Moreover, the length of the time interval of the element e staying in the state a ∈ A_e does not affect the lengths of the time intervals of staying of the remaining elements from the set E in their reliability states. □
Let I = {0} ∪ K = {0, 1, ..., mE} be a set composed of the system identifier (the element with a value of zero) and the identifiers of the elements from the set E. Let (Ω, F, P) be a probabilistic space, where: Ω is a set of elementary events, an elementary event meaning that the observed value of the studied characteristic takes a given value; F is a family of random events being a distinguished σ-body of subsets of the set Ω; and P is a probabilistic measure.
The model D (1) defines the tuple
D = ({D_k}_{k∈K}, D_0),   (3)
where: D_k is a model of the time evolution of the reliability states of the element e_k, and D_0 is a model of the time evolution of the reliability states of the system.
The model D_k (3) defines the tuple
D_k = (X_k, {T_{k,a}}_{a∈A_{e_k}}),   (4)
where: X_k is a random variable being a stochastic model of the process of the element e_k reliability state changes over time, and T_{k,a} is a random variable being a stochastic model of the process of changes of the lengths of the time intervals of staying of the element e_k in the reliability state a ∈ A_{e_k}.
Assumption 4 implies that the random variables X_1, ..., X_mE are independent, and that the random variables T_{k,a}, for k ∈ K, a ∈ A_{e_k}, are independent.
The model D_0 (3) defines the tuple
D_0 = (X_0, {T_{0,b}}_{b∈B}),   (5)
where: X_0 is a random variable being a stochastic model of the process of the system reliability state changes over time, and T_{0,b} is a random variable being a stochastic model of the process of changes of the lengths of the time intervals of staying of the system in the reliability state b ∈ B.
Assumption 4 implies also that the random variables X_0 and T_{0,b}, b ∈ B, are independent.
The relationship between the reliability states of all elements from the set V and the reliability states of the system is defined by the structural function φ: B^mE → B of the following general form,
X_0(t) = φ(X_1(t), ..., X_mE(t)).   (6)
Information Reliability Surveys
The aim of information reliability surveys is gathering statistical samples about the lengths of the time intervals of the elements e_1, ..., e_mE and of the entire system staying in individual reliability states. Reliability surveys of the elements e_k are commonly carried out in real conditions of use [1, 14, 18]. In view of the fact that direct reliability surveys of real-time systems are very difficult and often impossible [16], statistical data about the lengths of the time intervals of the system staying in its reliability states are typically generated by a computer program simulating the use process of the examined system.
Let us consider a survey Q = {Q_{k,a}: k ∈ K, a ∈ A_{e_k}} composed of the experiments Q_{k,a}. The experiment Q_{k,a} relies on repeated observations of the length τ_{k,a} = t_1 − t_0 of the time interval [t_0, t_1) of staying of the element e_k in the state a, where: t_0 is the initial moment, and t_1 is the final moment of the time interval of staying of the element e_k in the state a.
The experiment Q_{k,a} is carried out according to a discrete plan of the form
Q_{k,a} = (e_k, a, M_{k,a}),   (7)
where M_{k,a} is the number of repetitions of the measurement of the length of the time interval of staying of the element e_k in the state a.
The vector τ_{k,a} = [τ_{k,a}^(1), ..., τ_{k,a}^(M_{k,a})] is called a result of the experiment Q_{k,a} (or a sample), where τ_{k,a}^(m) is the length of the time interval of staying of the element e_k in the state a observed for the m-th time. The matrix T = [τ_{k,a}] is called a result of the survey Q (or a sample).
Let us assume that the components of the vector τ_{k,a} are realizations of the random variable T_{k,a} (4), for m = 1, ..., M_{k,a}.
Simulation Model of the Use Process of the System
Simulation of the use process of the system is most often conducted with the discrete event method [4, 5, 15, 17]. Let t denote the system time. The sequence of moments t(s), s = 0, 1, ..., is called the incidents of discrete events, where t(0) is the initial moment of the simulation, and t(s) is called the current moment of the simulation.
Assumption 5. At the moment t(0) all elements from the set E are able. □
Events are induced by changes of the reliability states of the elements e_1, ..., e_mE. Let x_k(s) be the reliability state of the element e_k at the moment t(s); under Assumption 5 we have x_k(0) = 1, for k ∈ K. Let x_0(s) be the reliability state of the system at the moment t(s).
The discrete event method consists in generating, at the moments t(s), pseudorandom numbers determining the random lengths of the time intervals of staying of the element e_k in the state a ∈ A_{e_k}, for k ∈ K. These numbers are generated by pseudorandom generators [6, 17, 19] whose distributions are fitted to the distributions of the random variables T_{k,a} (4).
At the moments t(s), s = 1, 2, ..., the current reliability state of the system is calculated from the relation (6). Let x_0(s−1) be the reliability state of the system at the moment t(s−1) preceding the current moment t(s). If at the moment t(s) a change of the reliability state of the system, in relation to the state x_0(s−1), took place, then the length t(s) − t(s−1) of the time interval of staying of the system in the state b = x_0(s−1) is recorded, together with the number of occurrences of the state b observed until the moment t(s).
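A minimal sketch of this discrete event scheme is given below (Python is assumed here purely as an illustration language; the function and variable names are not from the paper). Each element alternates between the ability state 1 and the inability state 0, with sojourn lengths drawn from Weibull generators, and the system state is recalculated from the structural function at every event moment.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

def simulate(weibull_params, phi, horizon):
    """Discrete event simulation of a repairable bi-state system.

    weibull_params[k] = {state: (alpha, beta)} for each element k;
    phi maps the dict of element states to the system state (0 or 1);
    returns the observed sojourn times of the system in states 0 and 1.
    """
    x = {k: 1 for k in weibull_params}        # Assumption 5: all elements able
    # Event queue of (moment of the next state change, element id); the first
    # change of each element ends its initial ability interval (state 1).
    events = [(weibull_params[k][1][1] * rng.weibull(weibull_params[k][1][0]), k)
              for k in weibull_params]
    heapq.heapify(events)
    sojourn = {0: [], 1: []}
    t_prev, b_prev = 0.0, phi(x)              # system state entered at t(0)
    while events[0][0] < horizon:
        t, k = heapq.heappop(events)
        x[k] ^= 1                             # element k toggles 1 <-> 0
        alpha, beta = weibull_params[k][x[k]]
        heapq.heappush(events, (t + beta * rng.weibull(alpha), k))
        b = phi(x)
        if b != b_prev:                       # system changed state at t(s):
            sojourn[b_prev].append(t - t_prev)  # record the finished sojourn
            t_prev, b_prev = t, b
    return sojourn
```

Keeping only the next state-change moment of each element in the queue is enough, because under Assumption 4 the sojourn times of the elements are mutually independent.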
Simulation Experiments
The aim of simulation experiments is to generate, using a computer program that simulates the use process of the system, data that are realizations of the random variables T_{0,b} (5), b ∈ B.
Let us consider the simulation survey S = {S_b: b ∈ B} composed of the simulation experiments S_b. Each experiment S_b relies on repeated observations of the length τ_{0,b} = t_1 − t_0 of the time interval [t_0, t_1) of staying of the system in the state b ∈ B, where: t_0 is the initial moment, and t_1 is the final moment of the time interval of staying of the system in the state b.
The simulation survey S is carried out according to a discrete plan of the form
S_b = (b, N_b),   (8)
where N_b is the number of repetitions of the experiment S_b.
The vector τ_{0,b} = [τ_{0,b}^(1), ..., τ_{0,b}^(N_b)] is called a result of the experiment S_b (or a sample), where τ_{0,b}^(n) is the length of the time interval of staying of the system in the state b observed for the n-th time. The matrix [τ_{0,b}] is called a result of the survey S (or a sample).
Let us assume that the components of the vector τ_{0,b} are realizations of the random variable T_{0,b} (5), for n = 1, ..., N_b.
Reliability Analysis
Reliability analysis consists in the estimation of selected reliability measures of the system based on the results of the simulation survey carried out in accordance with the plan (8).
Let F_b(t) = P(T_{0,b} ≤ t) and S_b(t) = 1 − F_b(t) be, respectively, the distribution function and the survival function of the random variable T_{0,b}, where: b ∈ B, t ≥ 0.
The most widely used reliability measures of the system are:
 the expected value of the ability time of the system;
 the variance of the ability time of the system;
 the ability function of the system;
 the expected value of the inability time of the system;
 the variance of the inability time of the system;
 the inability function of the system;
 the coefficient of the system availability.
The Expected Value of the Ability Time of the System. The expected value of the random variable T_{0,1}, i.e. E[T_{0,1}], is called the expected value of the ability time of the system. This reliability measure is the average length of the time interval of the system staying in the ability state.
The Variance of the Ability Time of the System. The variance of the random variable T_{0,1}, i.e. V[T_{0,1}] = E[(T_{0,1} − E[T_{0,1}])²], is called the variance of the ability time of the system. This reliability measure is the mean square deviation of the ability time of the system from its expected value. The variance V[T_{0,1}] is a measure of the volatility of the length of the time interval of maintaining the system in the state of ability.
The Ability Function of the System. The ability function of the system S_1(t) = P(T_{0,1} > t) expresses the probability of the system staying in the ability state at least up to the moment t ≥ 0.
The Expected Value of the Inability Time of the System. The expected value of the random variable T_{0,0}, i.e. E[T_{0,0}], is called the expected value of the inability time of the system. This reliability measure is the average length of the time interval of the system staying in the inability state.
The Variance of the Inability Time of the System. The variance of the random variable T_{0,0}, i.e. V[T_{0,0}] = E[(T_{0,0} − E[T_{0,0}])²], is called the variance of the inability time of the system. This reliability measure is the mean square deviation of the inability time of the system from its expected value. The variance V[T_{0,0}] is a measure of the volatility of the length of the time interval of maintaining the system in the state of inability.
The Inability Function of the System. The inability function of the system S_0(t) = P(T_{0,0} > t) expresses the probability of the system staying in the state of inability at least up to the moment t ≥ 0.
The Coefficient of the System Availability. The coefficient of the system availability k_g expresses the probability of the ability of the system at the start of its use.
From a practical point of view, the important case is that in which the random variables T_{0,b}, b ∈ B, have Weibull distributions [11]. In this case the reliability measures are determined by the following formulas,
S_b(t) = exp[−(t/β_b)^α_b],   (9)
E[T_{0,b}] = β_b Γ(1 + 1/α_b),   (10)
V[T_{0,b}] = β_b² [Γ(1 + 2/α_b) − Γ²(1 + 1/α_b)],   (11)
k_g = E[T_{0,1}] / (E[T_{0,1}] + E[T_{0,0}]).   (12)
Example 1. Let us consider the issue of the reliability analysis of a server farm (fig. 1a) composed of a web server (computer k1), a database server (computer k2), and two application servers (computers k3 and k4). Let us suppose that all computers are repairable with non-zero repair time. In the remainder of this paper the server farm is called simply the system. We consider the system to be able if the web server, the database server and at least one of the application servers are able.
A model of the reliability structure of the considered system is the graph (fig. 1b) of the form
G = (V, R, A),
where:
 V = {s, e_1, e_2, e_3, e_4, t} is a set of vertices, where: e_1 is the web server k1, e_2 is the database server k2, and e_3, e_4 are the application servers k3 and k4;
 R = {(s, e_1), (e_1, e_2), (e_2, e_3), (e_2, e_4), (e_3, t), (e_4, t)} is a set of edges;
 A = {A_s, A_{e_1}, ..., A_{e_4}, A_t} is a set of reliability states, where: A_s = A_t = {1}, A_{e_1} = ... = A_{e_4} = {0, 1}.
(a) (b)
Fig. 1. The server farm: (a) a technical structure; (b) a reliability structure
The structural function (6) takes the following form,
x_0 = φ(x) = x_1 ∧ x_2 ∧ (x_3 ∨ x_4),
where:
 x = [x_1, x_2, x_3, x_4] is a vector of reliability states of all elements from the set E, where x_k ∈ {0, 1}, for k = 1, ..., 4;
 π_1 = (s, e_1, e_2, e_3, t) and π_2 = (s, e_1, e_2, e_4, t) are the ability paths.
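A hedged usage example of the simulate() sketch given earlier may clarify how this structural function drives the simulation; the (α, β) pairs are those reported in Table 1 below, while the horizon value is arbitrary.

```python
# Structural function (6) of the server farm: web server (1) and database
# server (2) in series, application servers (3, 4) in parallel.
phi = lambda x: x[1] & x[2] & (x[3] | x[4])

# (alpha, beta) per element and state, taken from Table 1 below;
# state 0 = repair (inability) time, state 1 = ability time.
params = {
    1: {0: (2.00695, 24.2095), 1: (1.55287, 747.824)},
    2: {0: (1.9512, 24.4713), 1: (2.01148, 721.544)},
    3: {0: (1.92307, 27.5675), 1: (2.17924, 523.314)},
    4: {0: (2.6784, 34.7878), 1: (2.63988, 827.702)},
}

sojourn = simulate(params, phi, horizon=1_000_000.0)
k_g = sum(sojourn[1]) / (sum(sojourn[1]) + sum(sojourn[0]))
print(f"estimated availability coefficient: {k_g:.4f}")   # close to Table 5's 0.936
```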
Information Reliability Surveys. Let τ_{k,a} be the results of information reliability surveys carried out in accordance with the plans Q_{k,a} (7), for k = 1, ..., 4, a ∈ {0, 1}. Based on these results, examinations concerning the goodness-of-fit of the observed distributions of the samples τ_{k,a} with the Weibull distributions were conducted. Table 1 shows the evaluations of the parameters of the distributions W(α̂_k(a), β̂_k(a)) estimated by the maximum likelihood method based on the samples τ_{k,a}.

Table 1. The evaluations of the parameters of the distributions W(α̂_k(a), β̂_k(a))

k   a   α̂_k(a)   β̂_k(a)
1   0   2.00695   24.2095
1   1   1.55287   747.824
2   0   1.9512    24.4713
2   1   2.01148   721.544
3   0   1.92307   27.5675
3   1   2.17924   523.314
4   0   2.6784    34.7878
4   1   2.63988   827.702

Table 2 shows the results of the Pearson χ² test [2, 7, 12, 16] used to verify the null hypotheses stating the goodness-of-fit of the observed distributions of the samples τ_{k,a} with the distributions W(α̂_k(a), β̂_k(a)), where χ̂²_{k,a} is an evaluation of the test statistic from the sample τ_{k,a}, and p̂_{k,a} is an evaluation of the p-value α* for the fixed value of the test statistic.

Table 2. The results of the Pearson χ² test

k   a   χ̂²_{k,a}   p̂_{k,a}
1   0   1.0        0.985612
1   1   8.7        0.191166
2   0   3.8        0.70372
2   1   3.8        0.70372
3   0   5.2        0.51843
3   1   2.4        0.879487
4   0   5.2        0.51843
4   1   2.4        0.879487
According to Table 2, at the level of significance α = 0.05 there are no reasons to reject the null hypotheses stating the goodness-of-fit of the observed distributions of the samples τ_{k,a} with the distributions W(α̂_k(a), β̂_k(a)), for k = 1, ..., 4, a ∈ {0, 1}.
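The fitting-and-testing procedure behind Tables 1 and 2 can be reproduced in outline as follows. This is only a sketch under stated assumptions: SciPy is used for the estimation, the sample is synthetic (the original survey data are not given in the paper), and the number of classes is arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic sample standing in for one survey result (assumed data).
sample = 747.824 * rng.weibull(1.55287, size=500)

# Maximum likelihood estimation of the Weibull parameters (location fixed at 0).
alpha_hat, _, beta_hat = stats.weibull_min.fit(sample, floc=0)

# Pearson chi-square test over r equiprobable classes of the fitted distribution.
r = 10
edges = stats.weibull_min.ppf(np.linspace(0, 1, r + 1), alpha_hat, scale=beta_hat)
counts = np.diff(np.searchsorted(np.sort(sample), edges))  # class counts
expected = np.full(r, len(sample) / r)
chi2 = ((counts - expected) ** 2 / expected).sum()
# Two parameters were estimated, hence r - 1 - 2 degrees of freedom.
p_value = stats.chi2.sf(chi2, df=r - 1 - 2)
print(f"alpha = {alpha_hat:.4f}, beta = {beta_hat:.4f}, chi2 = {chi2:.2f}, p = {p_value:.3f}")
```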
Reliability Analysis. Let τ_{0,b} be the results of the simulation survey carried out in accordance with the plan S_b (8), for b ∈ {0, 1}. Based on these results, examinations concerning the goodness-of-fit of the observed distributions of the samples τ_{0,b} with the Weibull distributions were conducted. Table 3 shows the evaluations of the parameters of the distributions W(α̂(b), β̂(b)) from the samples τ_{0,b}, b ∈ {0, 1}.

Table 3. The evaluations of the parameters of the distributions W(α̂(b), β̂(b))

b   α̂(b)     β̂(b)
0   1.89816   24.7533
1   1.32287   350.52

Table 4 shows the results of the Pearson χ² test used to verify the null hypotheses stating the goodness-of-fit of the observed distributions of the samples τ_{0,b} with the distributions W(α̂(b), β̂(b)), where χ̂²(b) is an evaluation of the test statistic from the sample τ_{0,b}.
Table 4. The results of the Pearson χ² test

b   χ̂²(b)    p̂[χ̂²(b)]
0   15.7876   0.864209
1   34.0531   0.0643943

Table 4 shows that, at the level of significance α = 0.05, there are no reasons to reject the null hypotheses stating the goodness-of-fit of the observed distributions of the samples τ_{0,b} with the distributions W(α̂(b), β̂(b)), b ∈ {0, 1}.
The results of the reliability analysis of the system are given in Table 5 and in figure 2. Table 5 shows the evaluations of the reliability coefficients calculated on the basis of the values of the parameters of the distributions W(α̂(b), β̂(b)), b ∈ {0, 1}.

Table 5. The evaluations of the reliability coefficients (10)-(12) of the system

b   Ê[T_{0,b}]   V̂[T_{0,b}]
0   21.9658      144.86
1   322.627      60614.1

k̂_g = 0.936256

Figure 2 shows the plots of the functions Ŝ_b(t), being evaluations of the survival functions S_b(t) (9) of the random variables T_{0,b}, for b ∈ {0, 1}.
(a) (b)
Fig. 2. Plots of the reliability functions of the system: (a) a plot of the evaluation Ŝ_0(t) of the inability function of the system S_0(t) (9); (b) a plot of the evaluation Ŝ_1(t) of the ability function of the system S_1(t) (9)
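The Table 5 evaluations follow directly from the Table 3 parameters through the formulas (10)-(12); a short sketch of this computation (assuming Python with SciPy as the illustration tool) is given below.

```python
from scipy.special import gamma

def weibull_measures(alpha, beta):
    """Expected value (10) and variance (11) of a W(alpha, beta) variable."""
    mean = beta * gamma(1.0 + 1.0 / alpha)
    var = beta ** 2 * (gamma(1.0 + 2.0 / alpha) - gamma(1.0 + 1.0 / alpha) ** 2)
    return mean, var

# Parameters from Table 3: state 0 (inability), state 1 (ability).
e0, v0 = weibull_measures(1.89816, 24.7533)
e1, v1 = weibull_measures(1.32287, 350.52)
k_g = e1 / (e1 + e0)   # availability coefficient (12)

print(f"E[T0] = {e0:.4f}, V[T0] = {v0:.2f}")   # ~21.97 and ~144.9
print(f"E[T1] = {e1:.4f}, V[T1] = {v1:.1f}")   # ~322.6 and ~60614
print(f"k_g = {k_g:.6f}")                      # ~0.936256
```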
Summary
The paper presents a method for the reliability analysis of systems based on data generated by a computer program that simulates the use process of the considered system. The technique of discrete event simulation was applied. Events are induced by changes of the reliability states of the elements of the system reliability structure occurring at random moments t(s), s = 0, 1, ... . Based on the reliability states of the elements e1, ..., ek, the reliability state of the system is calculated using the structural function (6). During the simulation, the lengths of the time intervals of the system staying in individual reliability states are observed, and on the basis of these data the values of the reliability measures of the system are determined. In the work it is assumed that the random variables which are stochastic models of the processes of changes of the lengths of the time intervals of staying, both of the elements of the reliability structure and of the system, have (possibly different) Weibull distributions.
Directions for further work should relate to the construction of software simulators, because the results of previous studies [3, 15, 17] indicate that, as a result of the positive autocorrelation of the sequences of random numbers generated by software methods, the data generated by simulators have a large variance. This often makes the results of the reliability analysis of systems inadequate.
Bibliography
[1] Dodson B., Nolan D., Reliability Engineering Handbook, CRC Press, New York,
1999.
[2] Fan J., Yao Q., Nonlinear Time Series. Nonparametric and Parametric Methods,
Springer-Verlag, New York, 2003.
[3] Faulin J. (editor), Juan A.A. (editor), Martorell S. (editor), Ramirez-Marquez
J.E. (editor), Simulation Methods for Reliability and Availability of Complex Systems,
Springer-Verlag, London, 2010.
[4] Fishman G.S., Symulacja komputerowa. Pojęcia i metody, PWE, Warszawa, 1981.
[5] Fishman G.S., Discrete-event Simulation. Springer, Heidelberg, 2001.
160
Application of Computer Simulation for Examining the Reliability of Real-Time Systems
[6] Gentle J. E., Random Number Generation and Monte Carlo Methods, Springer, Heidelberg, 2003.
[7] Gentle J. E. (editor), Härdle W. (editor), Mori Y. (editor), Handbook of Computational statistics. Concepts and Methods, Springer, New York, 2004.
[8] Karpiński J., Korczak E., Metody oceny niezawodności dwustanowych systemów technicznych, Instytut Badań Systemowych PAN, Warszawa, 1990.
[9] Kleijnen J. P. C., Design and Analysis of Simulation Experiments, Springer, Stanford,
2010.
[10] Korzan B., Teoria niezawodności, Wojskowa Akademia Techniczna, Warszawa,
1983.
[11] Magiera R., Modele i metody statystyki matematycznej. Cześć I. Rozkłady i symulacja
stochastyczna, Oficyna Wydawnicza GiS, Wrocław, 2005.
[12] Magiera R., Modele i metody statystyki matematycznej. Cześć II. Wnioskowanie statystyczne, Oficyna Wydawnicza GiS, Wrocław, 2007.
[13] Pacut A., Prawdopodobieństwo. Teoria. Modelowanie probabilistyczne w technice, WNT,
Warszawa, 1985.
[14] Tobias P. A., Trindade D., Applied Reliability, Chapman and Hall/CRC, Boca
Raton, 2011.
[15] Wainer G.A. (editor), Mosterman P.J. (editor), Discrete-event Modeling and Simulation: Theory and Applications. CRC Press, New York, 2010.
[16] Wesołowski Z., Analiza niezawodnościowa niestacjonarnych systemów czasu rzeczywistego, Redakcja Wydawnictw Wojskowej Akademii Technicznej, Warszawa, 2006.
[17] Wesołowski Z., Simulation Research on the Reliability of Systems, Redakcja Wydawnictw Wojskowej Akademii Technicznej, Warszawa, 2013.
[18] Zamojski W., Niezawodność i eksploatacja systemów, Politechnika Wrocławska,
Wrocław, 1981.
[19] Zieliński R., Wieczorkowski R., Komputerowe generatory liczb losowych, WNT, Warszawa, 1997.
Summary
Key words: reliability of systems, computer simulation, reliability measures of systems
The aim of the work is to discuss the use of computer simulation to study the reliability of real-time systems. The methodologies of mathematical modeling and simulation modeling are described. The presented simulation algorithm uses the discrete event method. A quoted example shows the results of applying the proposed method to the reliability analysis of a server farm.
Zastosowanie symulacji komputerowej
do badania niezawodności systemów czasu rzeczywistego
Streszczenie
Słowa kluczowe: niezawodność systemów, symulacja komputerowa, miary niezawodności
systemów
Celem pracy jest omówienie sposobu wykorzystania symulacji komputerowej do badania niezawodności systemów czasu rzeczywistego. Przedstawiono metodologie modelowania matematycznego
i modelowania symulacyjnego. Omówiono algorytm symulacji wykorzystujący metodę kolejnych
zdarzeń. Przytoczono przykład obrazujący wyniki zastosowania proponowanej metody do analizy
niezawodności farmy serwerów.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Gustaw Konopacki
Maria Skłodowska-Curie Warsaw Academy
Modelling software testing process
with regard to secondary errors
Introduction
Software testing is the final step in the process of creating software systems and applications. The importance of this step lies in the fact that it is one of the main factors influencing the reliability of the software and, indirectly, the evaluation of the entire system.
The software testing process is not a single act but a repetitive sequence of steps in which, in simple terms, the software is run on specially prepared test data, errors in the software are identified, and the identified errors are removed. The purpose of each testing step is to detect as many as possible of the errors left undetected in the previous steps and to remove as many as possible of the detected errors.
The length of the testing process, which is usually expensive, is affected by the criteria used to assess the achieved level of software reliability. In general, these criteria are formulated on the basis of software reliability models (the first of which appeared as early as the 1970s) and rely on an assessment of:
• the expected number of errors remaining in the software at the end of the next stage of testing,
• the expected length of the testing time interval needed to detect the next error.
Early software reliability models allow the above-mentioned quantities to be determined only after testing has been completed. This is a significant drawback that hinders their use in planning the testing process, even in the time dimension alone. Therefore, attempts are still being made to build models that would be useful for planning the software testing process. One such proposal is included in [4]; it is a generalization of the previously published models of Shooman and of Jelinski and Moranda.
A very strong assumption adopted in the model of [4] is that the detection of an error is equivalent to its removal. Since, in practice, the detection of an error during software testing does not necessarily mean its immediate removal, correcting the model of [4] requires abandoning this assumption. This article formulates two models that allow for the possibility of not removing a detected error and, even worse, of introducing an additional error while removing the previous one. In both models the assumptions about the software testing process, presented in [4], are as follows:
• before testing starts, the software contains N errors,
• errors are independent of each other, i.e. the detection and removal of any of them does not affect the detection of any of the others,
• errors are indistinguishable,
• errors are detected individually,
• each step of the testing process starts with the simultaneous detection of all errors currently present in the software,
• the length of the time interval τ_j that elapses from the start of testing until the detection of the j-th error in order is a random variable with exponential distribution whose parameter depends on the number of the detected error:

P(τ_j ≤ x) = 1 − e^{−λ_j x},  x ≥ 0,  j = 1, 2, ..., N,   (1)

where the parameters λ_j are the same for all errors j = 1, 2, ..., N and equal λ.
In addition, the models considered in what follows (Model I and Model II) are subject to the following assumptions:
• Model I: the detection of an error does not imply its unconditional removal from the software; the detected error is actually removed with probability r ∈ (0, 1],
• Model II: as in Model I, a detected error is removed with probability r, but additionally it may happen, with probability q ∈ [0, 1], that an extra error, a so-called secondary error, is introduced during the removal.
These models will be used to determine the following most important characteristics of the software testing process:
• the expected value of the number of errors remaining in the software after time t from the start of testing,
• the expected value of the time from the start of testing until exactly j (j = 0, 1, 2, ..., N) errors remain in the software.
Model I
Under the assumptions presented above, the software testing process can be interpreted as a stochastic process (N, T) which describes the number of errors in the software after time t from the start of testing. It is a Markov process of the DC class (discrete states, continuous parameter (time)), where N = {0, 1, 2, ...} is the set of states and T = {t: t ≥ 0} is the set of parameter (time) values.
On the basis of the assumptions adopted for the considered model, the intensities of transitions between the states of the process can be stated as follows:

λ_{j,j−1} = j r λ,  λ_{j,j} = −j r λ,  j = 1, 2, ...   (2)

A graphic presentation of the transition matrix of the process is shown in Figure 1.
Fig. 1. Graphic presentation of the transition matrix of the stochastic process (N, T) describing the software testing process of Model I.
To determine the probability distribution vector of the process being in particular states after time t of software testing,

p(t) = [p_0(t), p_1(t), p_2(t), ...],

one must solve the following system of differential equations:

p'_0(t) = r λ p_1(t)
p'_1(t) = 2 r λ p_2(t) − r λ p_1(t)
...
p'_j(t) = (j + 1) r λ p_{j+1}(t) − j r λ p_j(t),  j = 2, 3, ...   (3)

with the initial conditions

p_j(0) = 1 for j = N,  p_j(0) = 0 for j ≠ N,   (4)

arising from the fact that at the start of testing the software contains N > 0 errors.
The system of equations (3) will be solved using the following generating function:

F(s, t) = Σ_{j=0}^{∞} p_j(t) s^j,  |s| ≤ 1.   (5)
Using the system of equations (3) and the function (5), and making the appropriate transformations, we obtain:

Σ_{j=0}^{∞} p'_j(t) s^j = r λ Σ_{j=1}^{∞} j p_j(t) s^{j−1} − r λ Σ_{j=1}^{∞} j p_j(t) s^j.   (6)
Taking into account equation (6) and the relations

∂F(s, t)/∂t = Σ_{j=0}^{∞} p'_j(t) s^j,   ∂F(s, t)/∂s = Σ_{j=1}^{∞} j p_j(t) s^{j−1},

the following partial differential equation is obtained:

∂F(s, t)/∂t = r λ (1 − s) ∂F(s, t)/∂s.   (7)
The solution of equation (7) is the function ([5]):

F(s, t) = (s e^{−rλt} + 1 − e^{−rλt})^N.   (8)
Expanding function (8) in a power series with respect to s, one obtains:

p_j(t) = C(N, j) e^{−jrλt} (1 − e^{−rλt})^{N−j}  for j = 0, 1, 2, ..., N,  and  p_j(t) = 0 for j > N,   (9)

where C(N, j) denotes the binomial coefficient.
Hence the expected value and the variance of the process (N, T) are expressed by the formulas:

E[N(t)] = N e^{−rλt},   D²[N(t)] = N e^{−rλt} (1 − e^{−rλt}).   (10)
In order to determine the expected time of software testing until exactly j (j = 0, 1, 2, ..., N) errors remain in the software, the following notation is adopted.
Let τ(i, j) be the random variable defining the residence time of the process (N, T) in the subset of states N_i = {j+1, j+2, ..., i}, i.e. the time that elapses from the moment the process reaches state i until it reaches state j. Let θ(j) denote the random variable defining the residence time of the process (N, T) in state j.
For the considered model of software testing, using the definition of the residence time of a homogeneous Markov process in a given set of states, one can write the following formulas:

τ(j+1, j) = θ(j+1),  j = 0, 1, 2, ...
τ(i, j) = θ(i) + τ(i−1, j),  i = j+2, j+3, ...   (11)

From equations (11) it follows that

τ(i, j) = Σ_{k=j+1}^{i} θ(k),  i > j.   (12)
Knowing that the distribution function of the random variable θ(j) is of the form

P(θ(j) ≤ x) = 1 − e^{−jrλx},  j = 1, 2, ...,

we obtain the following expression for the expected value of the random variable τ(i, j):

E[τ(i, j)] = (1/(rλ)) Σ_{k=j+1}^{i} 1/k,  j = 0, 1, 2, ...,  i = j+1, j+2, ...   (13)
From (13), for i = N, the expected value of the time that elapses from the beginning of testing until exactly j errors remain in the software is expressed by the formula

E[τ(N, j)] = (1/(rλ)) Σ_{k=j+1}^{N} 1/k,  j = 0, 1, 2, ..., N−1.   (14)
Model II
The model of software testing considered now, in accordance with the assumptions adopted earlier, allows for a situation in which a detected error not only may fail to be removed but an additional (secondary) error may be introduced, which means that the number of errors in the software increases by 1. The stochastic process (N, T) specifying the number of errors in the software after time t from the start of testing can again be interpreted as a Markov process of the DC class (discrete states, continuous parameter (time)).
On the basis of the assumptions adopted for the considered model, the intensities of transitions between the states of the process are as follows:

μ_{j,j+1} = j q(1−r) λ,
λ_{j,j} = −j (r + q − 2rq) λ,   (15)
λ_{j,j−1} = j r(1−q) λ,  j = 1, 2, ...
A graphic presentation of the transition matrix of the process is shown in Figure 2.
Fig. 2. Graphic presentation of the transition matrix of the stochastic process (N, T) describing the software testing process of Model II.
To determine the probability distribution vector of the process being in particular states after time t of software testing,

p(t) = [p_0(t), p_1(t), p_2(t), ...],

we must solve the following system of differential equations:

p'_0(t) = r(1−q) λ p_1(t)
p'_1(t) = 2 r(1−q) λ p_2(t) − (r + q − 2rq) λ p_1(t)
...
p'_j(t) = (j+1) r(1−q) λ p_{j+1}(t) − j (r + q − 2rq) λ p_j(t) + (j−1) q(1−r) λ p_{j−1}(t),  j = 2, 3, ...   (16)

with the initial conditions

p_j(0) = 1 for j = N,  p_j(0) = 0 for j ≠ N.   (17)
By applying to the system of equations (16) the appropriate transformation based on the generating function (5), the following partial differential equation is obtained:
 F s, t 
 F s, t 
 q1  r  λs 2  q1  r  λ r 1  q  λ  s  r 1  q  λ
,
t
s


F s,0   s N .
The solution of equation (18) is an expression of the form [5]:


(18)

N
 r 1  q  λ 1  e q1 r  λ r 1 q  λ t  q1  r  λ r 1  q  λe q1 r  λ r 1 q  λ  t  s 

F s, t   
 r 1  q  λ q1  r  λe q1 r  λ r 1 q  λ t  q1  r  λ 1  e q1 r  λ r 1 q  λ t  s 




(19)
By performing the appropriate transformations of (19), one obtains the following expression for the expected value of the process (N, T), i.e. the expected number of errors remaining in the software after time t of its testing:

E[N(t)] = N e^{−(r−q)λt}.   (20)

Formula (20) shows that the number of errors in the software will decrease with the time of its testing when the following condition is met: r > q.
Let (j+i,j) be the random variable defining the residence time of the process
(N, T) in a subset of states of Ni={j+1, j+2, ..., j+i}, i.e., the time that elapses from
the time when the process reached a state j+i until it reaches the state j for the first
time. Let (j) means the same variable as in Model I.
In accordance with the assumptions about model II and the definition of the
residence time Markov process in the specified set of states, you can formulate the
following equality:
• in the case when the process (N, T) goes from state j+1 to state j or from state j+i to state j+i−1, which occurs with the probability r(1−q) / (q(1−r) + r(1−q)):

τ(j+1, j) = θ(j+1),  j = 0, 1, 2, ...
τ(j+i, j) = θ(j+i) + τ(j+i−1, j),  i = 2, 3, ...   (21)

• in the case when the process (N, T) goes from state j+1 to state j+2 or from state j+i to state j+i+1, which occurs with the probability q(1−r) / (q(1−r) + r(1−q)):

τ(j+1, j) = θ(j+1) + τ(j+2, j),  j = 0, 1, 2, ...
τ(j+i, j) = θ(j+i) + τ(j+i+1, j),  i = 2, 3, ...   (22)
The expected value of the random variable τ(j+i, j) can be determined by solving the following system of algebraic equations, in which w = q(1−r) + r(1−q):

E[τ(j+1, j)] = E[θ(j+1)] + (q(1−r)/w) E[τ(j+2, j)]
E[τ(j+2, j)] = E[θ(j+2)] + (q(1−r)/w) E[τ(j+3, j)] + (r(1−q)/w) E[τ(j+1, j)]
...
E[τ(j+i, j)] = E[θ(j+i)] + (q(1−r)/w) E[τ(j+i+1, j)] + (r(1−q)/w) E[τ(j+i−1, j)],  i = 3, 4, ...   (23)
Since the system of equations (23) is infinite, the classical methods of solving finite systems of algebraic equations cannot be applied. In [5] an iterative method of solving the system of equations (23) is proposed, permitting a satisfactory estimate of the expected value of the random variable τ(j+i, j) to be obtained, which is expressed by the following formula:

E[τ(j+i, j)] ≈ (1/((r−q)λ)) Σ_{k=j+1}^{j+i} 1/k,  j = 0, 1, 2, ...,  i = 1, 2, ...   (24)
Hence, an estimate of the expected value of the time that elapses from the start of testing until exactly j errors remain in the software is expressed by the relation

E[τ(N, j)] ≈ (1/((r−q)λ)) Σ_{k=j+1}^{N} 1/k,  j = 0, 1, 2, ..., N−1.   (25)
Conclusions
The formulas obtained from the analysis of the presented models of software testing are consistent with intuition: the higher the error-detection intensity λ and the greater the probability r of removing a detected error, the shorter the duration of software testing. Including in Model II the probability q of introducing secondary errors makes the obtained results more realistic and allows them to be used in the practice of software testing. The article treats the quantities λ, r, q and N as fixed, although in practice they are usually not known exactly and can only be estimated (e.g. [4]); this, however, does not invalidate the considerations carried out or the results obtained. The listed quantities characterize certain aspects of the processes of designing, producing and testing software. The initial number N of errors in the software depends crucially on the methods and tools of design and production as well as on the complexity of the software. The intensity λ of error detection derives from the method used for software testing, while the probabilities r and q characterize the skills and experience of the team of testers. In the course of software testing all these quantities change with the passage of time: N, λ and q decrease while r increases, but this article accounts only for the change in the number of errors in the software.
Bibliography
[1]. Gichman I.I., Skorochod A.W., (1968), Wstęp do teorii procesów stochastycznych, PWN, Warszawa.
[2]. Feller W., (1996), Wstęp do rachunku prawdopodobieństwa, PWN, Warszawa.
[3]. Häggström O., (2001), Finite Markov Chains and Algorithmic Applications, Chalmers University of Technology.
[4]. Konopacki G., Worwa K., (1984), Uogólnienie modeli niezawodności oprogramowania Shoomana i Jelinskiego-Morandy, in: Biuletyn WAT w Warszawie, Nr 12/1984.
[5]. Konopacki G., Pluciński I., (1989), O pewnych modelach testowania oprogramowania, in: Biuletyn WAT w Warszawie, Nr 4/1989.
[6]. Lawler G.F., (1995), Introduction to Stochastic Processes, Chapman & Hall/CRC.
[7]. Mitzenmacher M., Upfal E., (2009), Metody probabilistyczne i obliczenia, WNT, Warszawa.
[8]. Norris J.R., (1997), Markov Chains, Cambridge Series in Statistical and Probabilistic Mathematics.
[9]. Papoulis A., (1972), Prawdopodobieństwo, zmienne losowe i procesy stochastyczne, WNT, Warszawa.
[10]. Ross S.M., (1996), Stochastic Processes, John Wiley & Sons, New York.
Summary
Key words: software testing, Markov process
The article discusses two software testing models which take into account the possibility of not removing a detected error (Model I) and the possibility of introducing an additional error, a so-called secondary error (Model II). Formulas are given that allow estimating the expected number of errors remaining in the software after a certain period of its testing and the expected duration of the testing process until the software is completely tested.
Modelowanie procesu testowania oprogramowania
z uwzględnieniem błędów wtórnych
Streszczenie
Słowa kluczowe: testowanie oprogramowania, proces Markowa
W artykule rozpatruje się dwa modele testowania oprogramowania, w których uwzględniono
możliwość niepoprawienia wykrytego błędu (model I) oraz możliwość wprowadzenia dodatkowego
błędu, tzw. błędu wtórnego (model II). Formułuje się zależności umożliwiające oszacowanie oczekiwanej liczby błędów pozostałych w oprogramowaniu po upływie określonego czasu testowania
oraz oczekiwanego czasu trwania procesu do całkowitego wytestowania oprogramowania.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Kazimierz Worwa
Military University of Technology
Maria Skłodowska-Curie Warsaw Academy
Analytical method for choosing
the best software supplier
Introduction
As software becomes more and more important in systems that perform complex and responsible tasks, e.g. in military defense or nuclear reactors, there are also
risks of software-caused failures. There is now general agreement about the need to
increase software reliability and quality by eliminating errors created during software
development. Industry and academic institutions have responded to this need by
improving developmental methods in the technology known as software engineering and by introducing systematic checks to detect software errors during and in
parallel with the developmental process.
For many manufacturing companies the basic problem becomes eliminating defective products: detecting and correcting a serious software defect may entail recalling hundreds of thousands of products. In the past 40 years,
hundreds of research papers have been published in the areas of software quality,
software engineering development process, software reliability modeling, software
independent verification and validation and software fault tolerance. Software engineering is evolving from an art to a practical engineering discipline [8].
A large number of analytical models have been proposed and studied over the
last two decades for assessing the quality of a software system. Each model must
make some assumptions about the development process and test environment. The
environment can change depending on the software application, the lifecycle
development process as well as the capabilities of the engineering design team [9].
Therefore, it is important for software users and practitioners to be familiar with all
the relevant models in order to make informed decisions about the quality of any
software product.
As the functionality of computer operations becomes more essential and yet
more complicated and critical applications increase in size and complexity, there is
a great need for looking at ways to quantify and predict the reliability of computer
systems in various complex operating environments [11]. Faults in software design, especially logic faults, thus become more subtle. Logic errors in the software are usually not hard to fix, but diagnosing logic bugs is the most challenging task, for many reasons; the fault, again, is usually subtle.
Let us define the terms such as software error, fault and failure [3]. An error is
a mental mistake made by the programmer or designer. A fault is the manifestation
of that error in the code. A software failure is defined as the occurrence of an incorrect output as a result of an input value that is received with respect to the specification.
What precisely do we mean by the term failure? It is the departure of the external
results of system operation from user needs. So failure is something dynamic.
A system has to be operating for a failure to occur. The term failure relates to the
behavior of the system. It is worth noting that a failure is not the same thing as
a bug or, more properly, a fault. The very general definition of failure is deliberate: it can include such things as deficiencies in performance attributes, e.g. an excessive response time, although there can be disadvantages in defining failure too generally.
A fault in software is the defect in the program that, when executed under particular conditions, causes a failure. There can be different sets of conditions that
cause failures. Hence a fault can be the source of more than one failure. A fault is a
property of the program rather than a property of its execution or behavior. It is
what we are really referring to in general when we use the term defect or bug.
Software reliability modelling has become one of the most important aspects
in software reliability engineering since first software reliability models appeared.
Various methodologies have been adopted to model software reliability behaviour.
Most of the existing work on software reliability modelling is focused on a continuous time base, which assumes that software reliability behaviour can be measured in
terms of time. It may be a calendar time, a clock time or a CPU execution time (see
e.g. [4, 5, 7, 8, 9, 10, 11, 12, 14]). Although this assumption is appropriate for a wide
scope of software systems, there are many systems, which are essentially different
from this assumption. For example, reliability behaviour of a reservation software
system should be measured in terms of how many reservations are successful, rather
than how long the software operates without any failure. Similarly, reliability behaviour of a bank transaction processing software system should be assessed in terms
of how many transactions are successful, etc. Obviously, for these systems, the time
base of reliability measurement is essentially discrete rather than continuous. Models
that are based on a discrete-time approach are called input-domain or run-domain
models (see e.g. [4, 5, 7, 8, 9, 10, 11, 12, 14]). They usually express reliability as the
probability that an execution of the software is successful.
In spite of permanent improvement of the software design and implementation methods used in the practice of software development, they still cannot guarantee developing a complicated software system entirely free of errors. These errors appear during the useful exploitation of the software and cause financial losses and other difficulties. In order to minimize the scale of such difficulties, software users demand from developers software that is as reliable as possible. Unfortunately, more reliable software is more expensive, so there is a practical problem of determining a compromise between software quality and cost requirements. In practice, determining such a compromise is easier if it is possible to estimate both the software reliability level and its development cost by means of appropriate measures.
The purpose of this paper is to propose a formal way of choosing a software developer by formulating and solving a bicriterial optimization problem that minimizes both the value of the number of software tasks which have incorrect realization during some time period and the value of the software development cost. The proposed method of determining the best software developer will be illustrated by a simple numerical example.
Mathematical model of the software exploitation process
We will consider some useful software that services arriving tasks. Each input
task involves a sequence of software operations, whereas an operation is a minimum execution unit of software. The concrete sense of an operation is subject to
application context. For example, an operation can correspond to execution of a
test case, of a program path, etc. Some input data set is connected with every operation, and an operation consists in executing the software with that data set. An operation can be viewed as a transformation of an input state to an output state. One
recognizes the possibility of a software failure by noting a discrepancy between the
actual value of a variable occurring during an operation and the value of that variable expected by users.
We can view the execution of a program as a single entity, lasting for months or
even years for real-time systems. However, it is easier to characterize the execution
if you divide it into a set of operations. Then the environment is specified by the
operational profile, where operational profile is a complete set of operations with
their probabilities of occurrence. Probability of occurrence refers to probability
among all invocations of all operations. In turn, there are many possible instances
of operations, each called a run. During execution, the factors that cause a particular
operation and a particular run within that operation to occur are very numerous and
complex. Hence, we can view the operations required of the program as being selected randomly in accordance with the probabilities just mentioned. The runs within those operations also occur randomly with various probabilities.
A run is specified, with respect to the operation of which it is a part, by its input state, or set of values for its input variables. Input variables are variables that exist
external to an operation and influence its execution. The input state is not the same
thing as the machine state, which is the much larger set of all variable values accessible to the computer. The machine state also includes variables that do not affect a
run and are not set by it. Recurrent runs have the same input state. We judge the
reliability of a program by the output states (sets of values of output variables created) of its runs. Note that a run represents a transformation between an input state
and an output state. Multiple input states may map to the same output state, but
a given input state can have only one output state. The input state uniquely
determines the particular instructions that will be executed and the values of their
operands. Thus, it establishes the path of control taken through the program. It also
uniquely establishes the values of all intermediate variables. Whether a particular
fault will cause a failure for a specific run type is predictable in theory. However, the
analysis required to determine this is impractical to pursue.
It is noteworthy that such a way of working is typical of most types of software exploited in practice: reservation programs, inquiry programs, warehouse management programs, registration and record-keeping programs and many others are typical examples.
Both the human error process that introduces faults into software code and the
run selection process that determines which code is being executed at any time and
under what conditions, and hence which faults will be stimulated to produce failures, are dependent on an enormous number of time-varying variables. Hence,
researchers have generally formulated software reliability models as random processes in time. The models are distinguished from each other in general terms by
the probability distribution of failure times or number of failures experienced and
by the nature of the variation of the random process with time. A software reliability model specifies the general form of the dependence of the failure process on the
factors mentioned. You then particularize it by estimating its parameters.
It is assumed that tasks arrive at the software in a random way and the time intervals τ_k, k = 1, 2, ..., between succeeding tasks are independent random variables with the same distribution function G(t), t ≥ 0.
Let L(t) denote the number of tasks which arrive at the software in time [0, t]. For t ≥ 0 the value L(t) is a random variable, L(t) ∈ {1, 2, 3, ...}. The process {L(t), t ≥ 0} is a stochastic process with a continuous time parameter and a countably infinite set of states.
The result of the growing maturity of software development is that it is no longer
adequate that software simply works; it must now meet other customer-defined
criteria. Software development is very competitive, and there are many competent
software developers spread throughout the world.
Surveys of users of software-based systems generally indicate that users rate on
the average the most important quality characteristics as: reliability, rapid delivery,
and low cost (in that order). In a particular situation, any of these major quality
characteristics may predominate. The quality of the software system usually depends
on how much time development (especially testing) takes and what technologies are
used. On the one hand, the more time people spend on development and testing,
the more errors can be removed, which leads to more reliable software; however,
the testing cost of the software will also increase. On the other hand, if the testing
time is too short, the cost of the software could be reduced, but the customers may
take a higher risk of buying unreliable software [11]. This will also increase the cost
during the operational phase, since it is much more expensive to fix an error during
the operational phase than during the testing phase.
When choosing a software developer, the user is interested in software quality
(measured for example by a failure intensity), development time and development
cost. For the purpose of this paper a software developer will be characterized by a
probability of correct realization by the software of a single task, i.e. probability of
correct execution of the software with single input data set. Let q mean this probability. Respectively, let p = 1 - q mean a probability of incorrect realization of
single task, whereas realization of a single task is incorrect if some errors appear
during execution of the software with data set connected with that task. It is obvious that probabilities q and p = 1 - q characterize software developer. In general,
value of q depends on the development methods and technologies, the lifecycle
development process as well as the capabilities of the engineering design team.
Let N(t) denote the number of tasks which have incorrect realization during the time period [0, t], i.e. during whose service some software errors appear. If we assume that no task arriving at the software is lost (a system with an infinite queue), the value N(t) can be determined as follows:
N(t) = Σ_{n=1}^{L(t)} X_n,   (1)

where X_n, n = 1, 2, ..., L(t), are random variables with the same zero-one distribution

P(X_n = 1) = p,  P(X_n = 0) = q.
For a natural number m, the random variable N(t) has the following binomial conditional distribution:

P(N(t) = n | L(t) = m) = C(m, n) p^n q^{m−n} for m ≥ n,  and 0 for m < n,   (2)

where n = 0, 1, 2, ... and C(m, n) denotes the binomial coefficient.
Taking into account (1) and (2) we can determine the probability distribution function of the random variable N(t) as a marginal distribution of the two-dimensional random variable (N(t), L(t)):

P(N(t) = n) = Σ_{m=0}^{∞} P(N(t) = n | L(t) = m) · P(L(t) = m),  n = 0, 1, 2, ...   (3)
Considering the prior assumptions, the probability P(L(t) = m) can be determined as follows [1, 6]:

P(L(t) = m) = G_m(t) − G_{m+1}(t),  m = 0, 1, 2, ...,   (4)

where G_m(t), m ≥ 1, denotes the distribution function of the random variable t_m of the form

t_m = Σ_{k=1}^{m} τ_k.   (5)

It is assumed that G_0(t) ≡ 1 and G_1(t) = G(t).
Then, taking into account (2) and (4), we can determine the probability distribution function of the random variable N(t) as follows:

P(N(t) = n) = Σ_{m=n}^{∞} C(m, n) p^n q^{m−n} [G_m(t) − G_{m+1}(t)],  n = 0, 1, 2, ...   (6)
The mean value of the number of incorrectly serviced tasks in the time period [0, t] has the following form:

E[N(t)] = Σ_{n=0}^{∞} n · P(N(t) = n),   (7)

where the probability P(N(t) = n) is stated by (6).
For example, if G(t) = 1 − e^{−λt}, i.e. when the random variables τ_k, k = 1, 2, ..., have the same exponential distribution with parameter λ, we obtain

P(N(t) = n) = Σ_{m=n}^{∞} C(m, n) p^n q^{m−n} (λt)^m e^{−λt} / m!,  n = 0, 1, 2, ...,

from where, after simple transformations, we have

P(N(t) = n) = (pλt)^n e^{−pλt} / n!,  n = 0, 1, 2, ...   (8)
(8)
Expression to calculate a mean value of random variable N(t) has in this case the
form
E [ N ( t )]  pt . (9)
Using (6) we can determine the probability that no errors appear during the useful exploitation of the software in the time period [0, t]; this probability is obtained from (6) by taking n = 0. Both the above probability and the mean value (7) of the number of incorrectly serviced tasks in [0, t] can be used as reliability measures of the software under investigation. It is worth adding that these measures, unlike most reliability measures found in the subject literature (e.g. in [5, 10, 11, 12]), take into account not only the reliability level of the software but the circumstances of its exploitation as well.
Formulation of a bicriterial optimization problem of choosing a software developer
The mean value (7) of the number of incorrectly serviced tasks during the useful exploitation of the software in a given time period can be used as one of the criteria (the quality criterion) in determining a compromise which reconciles the user's requirements regarding both maximization of the software reliability level and minimization of its production cost.
A practical problem of the choice of the best software developer by a potential user of that software will be considered. The user can choose one of a number of possible developers of the software, taking into account the characterization of both the software exploitation circumstances and the technology of software development.
Let I denote the set of numbers of all possible software developers, I = {1, 2, ..., i, ..., I}.
From the software user's viewpoint the i-th developer will be characterized by a pair of numbers (q_i, k_i), where q_i denotes the probability of correct execution of the software with a single input data set if the software is developed by the i-th developer, and k_i denotes the cost of the software development by the i-th developer.
Because individual developers use different design and implementation methods, have different computer equipment, use different organization and management styles and so on, the characteristics q_i, k_i can be different for every developer.
The software exploitation circumstances, i.e. the circumstances in which the software will be used, will be characterized by means of the function G(t), which is the distribution function of the time intervals between succeeding demands of use of the software. We still assume that these time intervals are independent random variables with the same distribution.
The choice made by the user will be characterized by means of the following zero-one vector:

X = (x_1, x_2, ..., x_i, ..., x_I),

where

x_i = 1 if the software is developed by the i-th developer, and x_i = 0 if not,   (10)

while

Σ_{i=1}^{I} x_i = 1.   (11)
Expression (11) ensures that the software is developed by exactly one developer.
Let N_t(X) denote the software reliability coefficient for the choice X, interpreted as the mean value of the number of incorrectly serviced tasks arriving at the software in the time period [0, t].
According to (7), the coefficient N_t(X) can be determined as follows:

N_t(X) = Σ_{i=1}^{I} x_i E[N(t)],   (12)

where the value E[N(t)] is stated by (7) with p_i, q_i substituted for p and q respectively.
Let K(X) denote the development cost of the software for the choice X. This cost can be determined as follows:

K(X) = Σ_{i=1}^{I} x_i k_i,   (13)

where k_i denotes the cost of the software developed by the i-th developer.
On the basis of the prior assumptions and the expressions obtained, we can formulate the following bicriterial optimization problem of the choice of a software developer:

(X, F, R),   (14)

where:
X is the feasible solution set, defined as follows:

X = {X = (x_1, x_2, ..., x_i, ..., x_I): X complies with constraints (10), (11)};

F is a vector of quality coefficients of the form

F(X) = (N_t(X), K(X)),

where the values N_t(X), K(X) are determined by expressions (12) and (13) respectively;
R is a so-called domination relation, defined as follows:

R = {(y_1, y_2) ∈ Y × Y: y_11 ≤ y_21, y_12 ≤ y_22},  y_1 = (y_11, y_12),  y_2 = (y_21, y_22),

where Y is the so-called objective function space

Y = F(X) = {y = (N_t(X), K(X)): X ∈ X}.
The problem (14) is a bicriteria optimization problem with linear objective functions and linear constraints. A solution of this problem can be obtained by using the well-known methodology of solving multicriteria optimization problems [2, 13]. According to that methodology, as a solution of the problem (14), we can determine:
• a dominant solution set,
• a nondominant solution set,
• a compromise solution set.
Taking into account that the values of the objective functions N_t(X) and K(X) are inverse (in the sense that if the value N_t(X) decreases, the value K(X) increases), it is reasonable to expect that the dominant solution set will be empty. In such a situation the practically recommended approach is to determine a nondominant solution set. If this set is very numerous, we can narrow it down by determining the so-called compromise solution, i.e. a solution belonging to the nondominant solution set which is the nearest (in the sense of the Euclidean distance) to the so-called ideal point [2, 13].
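The domination relation R from (14) is easy to operationalize. The following sketch filters the nondominated set out of a list of objective-value pairs; the values used are the ones that appear later in Tab. 2:

```python
def dominates(y1, y2):
    """y1 dominates y2 in the sense of R: no worse in both criteria and not equal."""
    return y1[0] <= y2[0] and y1[1] <= y2[1] and y1 != y2

def nondominated(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Objective values (N_t(X), K(X)) of the five feasible choices (cf. Tab. 2).
Y = [(105.0, 5000.0), (42.0, 6000.0), (31.5, 6500.0), (21.0, 9500.0), (10.5, 12000.0)]
print(nondominated(Y))   # all five points are nondominated
```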
The best software developer, chosen by solving the optimization problem (14), will both minimize the value N_t(X) of the number of software tasks which have incorrect realization during the time period [0, t] and minimize the value of the software development cost K(X).
Numerical example
In order to illustrate the considerations described above, a simple numerical example will be presented.
Let the set of numbers of potential software developers have the form I = {1, 2, 3, 4, 5} and the quality-cost characterization of the potential developers be as in Tab. 1.
Tab. 1. Quality-cost characterization of developers

i    q_i     k_i · 10^3
1    0.50    5
2    0.80    6
3    0.85    6.5
4    0.90    9.5
5    0.95    12
For the quality-cost parameters characterizing the potential developers from Tab. 1 we will solve the bicriterial optimization problem (14).
If we assume that the time intervals between task arrivals at the software have the exponential distribution G(t) = 1 − e^{−λt}, t ≥ 0, we obtain

N_t(X) = λt Σ_{i=1}^{5} x_i p_i.

According to the numerical values q_i, k_i, i ∈ I, that were assumed, the feasible solution set has the form

X = {(1,0,0,0,0), (0,1,0,0,0), (0,0,1,0,0), (0,0,0,1,0), (0,0,0,0,1)}.
Tab. 2. Values of the objective function space

i    N_t(X)    K(X)
1    105.00    5000.00
2    42.00     6000.00
3    31.50     6500.00
4    21.00     9500.00
5    10.50     12000.00
The feasible solution set for this problem is a five-element set of zero-one vectors. We assume that λ = 0.21 h^{−1} and t = 1000 h. The objective function values for the feasible solution set are presented in Tab. 2. It is easy to check that the dominant solution set of the problem (14) is empty (because the objective functions N_t(X) and K(X) are inverse in the sense of their values). In that case the practically recommended approach is to determine a nondominant solution set. It can easily be proved that the above set X is a nondominated solution set, i.e. every vector from X is a nondominated solution of the bicriterial optimization problem (14). According to the methodology of solving multiple objective decision problems, in such a case we can find a compromise solution with reference to some measure of the distance between the so-called ideal point and particular points of the objective function space Y [2, 13].
Fig. 1. An illustration of the objective function space (unnormalized Y and normalized Ȳ)
The co-ordinates of the ideal point y* = (y*_1, y*_2) ∈ Y are defined as follows:

y*_1 = min_{X∈X} N_t(X),  y*_2 = min_{X∈X} K(X).

In accordance with the assumed values of the quality-cost parameters, we have y*_1 = 10.5 and y*_2 = 5000.
In order to narrow down the nondominant solution set we will determine a compromise solution of this problem, i.e. a solution belonging to the nondominant solution set that is the nearest (in the sense of the Euclidean distance) to the so-called ideal point [2, 13]. For this reason both objective functions N_t(X) and K(X), determined by (12) and (13) respectively, will be normalized by means of the following formulae [2, 13]:

N̄_t(X) = (N_t(X) − N_t^min) / (N_t^max − N_t^min),
K̄(X) = (K(X) − K^min) / (K^max − K^min),   (15)
where

N_t^min = min_{X∈X} N_t(X),  N_t^max = max_{X∈X} N_t(X),   (16)
K^min = min_{X∈X} K(X),  K^max = max_{X∈X} K(X),   (17)

and X is the feasible solution set.
Tab. 3. Values of the normalized objective function space

i    N̄_t(X)    K̄(X)
1    1.00      0.00
2    0.33      0.14
3    0.22      0.21
4    0.11      0.64
5    0.00      1.00
As a result of the normalization both normalized objective functions N̄_t(X) and K̄(X) have values belonging to the range [0, 1]. It is easy to notice that in the considered example the normalized ideal point (N̄_t(X), K̄(X)) is of the form (0, 0).
The searched compromise solution X° ∈ X can be determined as follows:

X° = F^{−1}(y°),

where the point y° = (y°_1, y°_2) ∈ Y minimizes the norm

‖y* − y°‖ = min_{y∈Y} ‖y* − y‖.
According to the numerical values which were assumed we have y° = (0.22, 0.21) and, respectively, X° = (0, 0, 1, 0, 0), which means that the optimal variant of the software development is that for i = 3.
Tab. 4 presents the results of solving the optimization problem (14). The row (in bold) that corresponds to the vector X° is the compromise solution of this problem, i.e. X° is the vector that is the nearest to the ideal point (0, 0), where the distance function d[(N̄_t(X), K̄(X)), (0, 0)] is of the form

d[(N̄_t(X), K̄(X)), (0, 0)] = √([N̄_t(X)]² + [K̄(X)]²).   (18)

It is easy to notice that the vector X° = (0, 0, 1, 0, 0) complies with the following condition:

d[(N̄_t(X°), K̄(X°)), (0, 0)] = min_{X∈X} d[(N̄_t(X), K̄(X)), (0, 0)].   (19)
Tab. 4. Compromise solution of the optimization problem (14)

i   x_1  x_2  x_3  x_4  x_5   N_t(X)   N̄_t(X)   K(X)      K̄(X)   d[(N̄_t(X), K̄(X)), (0, 0)]
1   1    0    0    0    0     105.0    1.00     5000.0    0.00    1.00
2   0    1    0    0    0     42.0     0.33     6000.0    0.14    0.36
3   0    0    1    0    0     31.5     0.22     6500.0    0.21    0.31
4   0    0    0    1    0     21.0     0.11     9500.0    0.64    0.65
5   0    0    0    0    1     10.5     0.00     12000.0   1.00    1.00
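The content of Tab. 4 can be recomputed in a few lines: form N_t(X) by (12), normalize by (15) and pick the point closest to (0, 0) by (18). A minimal sketch using the data of Tab. 1 and the assumed λ = 0.21 1/h, t = 1000 h:

```python
from math import hypot

q = [0.50, 0.80, 0.85, 0.90, 0.95]                      # q_i from Tab. 1
k = [5000.0, 6000.0, 6500.0, 9500.0, 12000.0]           # k_i from Tab. 1
lam, t = 0.21, 1000.0

n_t = [lam * t * (1 - qi) for qi in q]                  # N_t(X), cf. (12)

def normalize(v):
    """Normalization (15): map the values linearly onto [0, 1]."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

nn, kk = normalize(n_t), normalize(k)
d = [hypot(a, b) for a, b in zip(nn, kk)]               # distance (18) to (0, 0)
best = min(range(len(d)), key=d.__getitem__)
print("compromise developer:", best + 1, "distance:", round(d[best], 2))  # 3, 0.31
```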
Conclusions
Software development cost and software reliability are the most important
factors in the practice of software engineering. In recent years, the cost of developing software and the penalty cost of software failure have become a major expense
in the whole system. As software projects become larger, the rate of software defects increases geometrically [11]. In response to this problem software developers
have given much attention to the methods of software development and have built many tools to support the software development process. The main goal is to develop a software product in such a way that it is delivered at the right time, at an acceptable cost, and with satisfactory reliability.
Software engineering practice shows that the cost of software production and the level of its reliability are closely related. If the reliability of the delivered software is set too high, the delivery time may be too long and the software cost may be too high. If the reliability objective is set too low, the cost of the software is also low, but the software system may not meet user expectations because of the low level of its reliability. There is therefore an urgent need to define the conditions for a rational compromise between the level of software reliability and the cost of its development.
The paper proposes a formal way of choosing a software developer by formulating and solving the bicriterial optimization problem that will both minimize the
value of the number of software tasks which have incorrect realization during some
time period and minimize the value of the software development cost.
An interpretation of the terms "task" and "service" makes it possible to apply the considerations presented to almost every piece of useful software, including the so-called real-time software.
Practical usage of the presented method of determining an optimal choice of a software developer is possible if the values q_i, k_i and the distribution functions G_i(t), i ∈ I, are known. These values can be estimated by means of methods developed on the basis of software reliability theory (see e.g. [4, 8, 10, 11, 12]).
The set of constraints used in the bicriterial optimization problem (14) can be changed according to current needs. In particular, that set can be completed with constraints which would guarantee that the values of both the software development cost and the mean number of tasks serviced incorrectly keep within feasible limits.
The method of choosing a software developer that has been proposed was illustrated by a simple numerical example. For the assumed values of the quality-cost characteristics of the developers, the best producer has been obtained as a compromise solution of the bicriteria optimization problem (14).
References
[1] Doane, Applied Statistics in Business & Economics, McGraw-Hill, New York, 2006.
[2] Ehrgott M., Multicriteria Optimization, Springer, Berlin-Heidelberg, 2005.
[3] IEEE, Standard Glossary of Software Engineering Terminology, IEEE Standard
610.12.1990.
[4] Kapur P. K., Kumar S., Garg R. B., Contributions to hardware and software reliability,
World Scientific Publishing Co., 1999.
[5] Konopacki G., Worwa K., Uogólnienie modeli niezawodności oprogramowania Shoomana i Jelinskiego-Morandy, Biuletyn WAT, Nr 12, 1984.
[6] Koźniewska I., Włodarczyk M., Modele odnowy, niezawodności i masowej obsługi,
PWN, Warszawa, 1978.
[7] Lisnianski A., Levitin G., Multi-state system reliability: assessment, optimization and
applications, World Scientific Publishing Co., 2003.
[8] Lyu M.R., Handbook of Software Reliability Engineering, McGraw-Hill, New York,
1996.
[9] Malaiya Y.K., Srimani P.K. (eds.), Software Reliability Models: Theoretical Developments, Evaluation and Applications, IEEE Computer Society Press, Los Angeles,
1990.
[10] Musa J., Software reliability engineering: more reliable software, faster and cheaper, Author
House, Bloomington, 2004.
[11] Pham H., System software reliability, Springer-Verlag London Limited, 2006.
[12] Rykov V.V., Mathematical and Statistical Models and Methods in Reliability: Applications to Medicine, Finance, and Quality Control, Springer, 2010.
[13] Stadnicki J., Teoria i praktyka rozwiązywania zadań polioptymalizacji, WNT, Warszawa 2006.
[14] Trachtenberg M., A general theory of software reliability modeling, IEEE Transactions on Reliability, Vol. 39, No. 1, 1990.
Summary
Key words: software reliability, software development, bicriterial optimization problem
A practical problem of choosing a software developer is considered. This problem is investigated from a user's viewpoint, i.e. it is assumed that the software which is needed should be not only reliable but also as cheap as possible. It is suggested that this problem be solved by formulating and solving an appropriate bicriterial optimization problem with some reliability measure and the development cost of the software as the component criteria.
The purpose of the paper is to propose a formal way of determining a software developer by formulating and solving a bicriterial optimization problem which minimizes both the value of the number of software tasks that have incorrect realization during some time period and the value of the software development cost. A numerical example is presented to illustrate the practical usefulness of the proposed method. The exemplary bicriterial optimization problem is solved on the basis of the general methodology of solving multicriteria optimization problems.
Analityczna metoda wyboru najlepszego dostawcy oprogramowania
Streszczenie
Słowa kluczowe: niezawodność oprogramowania, wytwarzanie oprogramowania, problem
optymalizacji wielokryterialnej
W artykule rozpatruje się problem wyboru najlepszego dostawcy oprogramowania z punktu
widzenia użytkownika tego oprogramowania, któremu zależy na pozyskaniu oprogramowania
o jak najwyższej niezawodności i jednocześnie o jak najniższej cenie. Problem ten proponuje się
rozwiązać z użyciem metod analitycznych, poprzez sformułowanie i rozwiązanie odpowiedniego
zadania optymalizacji dwukryterialnej. Wyznaczone w ten sposób rozwiązanie maksymalizuje
poziom niezawodności oprogramowania i minimalizuje koszt jego wytworzenia. Przedstawiona
metoda wyznaczania najlepszego dostawcy oprogramowania zilustrowana została przykładem
liczbowym. Rozwiązanie sformułowanego zadania optymalizacji dwukryterialnej proponuje się
wyznaczyć w oparciu o ogólnie przyjętą metodykę rozwiązywania zadań polioptymalizacji.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Kazimierz Worwa
Military University of Technology, Maria Skłodowska-Curie Warsaw Academy
Gustaw Konopacki
Military University of Technology, Maria Skłodowska-Curie Warsaw Academy
Analysis of the PageRank algorithm
Introduction
The WWW service is one of the most popular services offered by the modern Internet. Access to network resources is implemented mainly through search engines, whose functional capabilities are still growing. Users generate search queries by specifying a string of keywords and obtain the result of the search as a list of pages containing the text phrases specified in the query. Most search engines use familiar, traditional algorithms and information retrieval techniques developed for browsing relatively small and thematically coherent data sets, such as, for example, collections of catalogs of books in a library. These methods have proven to be ineffective for search on the Internet, which is huge, contains much less consistent data that very often changes its content and structure, and is spread over geographically distributed computers. For the purpose of increasing the efficiency of data mining on the Internet it is necessary to improve the existing information retrieval techniques or develop new ones. Studies carried out in order to estimate the size of the modern Internet show that it consists of more than one billion pages. Given that the average size of a page is equal to about 5-10 kilobytes, the size of the Internet is estimated at tens of terabytes. The Internet has very high dynamics of changes, both in size and in structure. A study conducted by Lawrence
and Giles [10] shows that the size of the network doubled in the past two years and that the dynamics of the Internet content is very high. Every day thousands of new websites are created and existing pages are constantly updated. Research conducted by Cho and Garcia-Molina [4] shows that about 23% of all pages available on the Web are updated daily. An ever-growing number of modern scientific works is devoted to knowledge of the size and structure of the Internet as well as to the methods of its formal modeling [4].
There are two main reasons why the traditional information retrieval techniques may not be sufficiently effective in the exploration of the modern Internet. The first reason stems from the above-mentioned very large size of the Internet and the very large dynamics of changes in its structure and content. The second reason refers to the existence of multiple systems describing the contents of individual Web pages, which can significantly impede the analysis of their contents. A qualitative change in the efficiency of Web search algorithms resulted from using, in their design, the analysis of the structure of links in the network. In particular, a link from page A to page B can be considered as a recommendation of page B by the author of page A. In recent years some new algorithms based on the knowledge of the structure of Internet links have been proposed. Practice shows that information retrieval algorithms of this class give qualitatively better results than the algorithms that implement the traditional methods and techniques of information retrieval.
Internet search engines use a variety of algorithms to sort Web pages based on their text content or on the hyperlink structure of the Web. This paper describes algorithms that use the hyperlink structure, called link-based algorithms: PageRank [12] and HITS [8]. The basic notion for these algorithms is the Web graph, which is a digraph with a node for each Web page and an arc between pages i and j if there is a hyperlink from page i to page j. Given a collection of Web pages linked to each other, the HITS and PageRank algorithms construct a matrix capturing the Web hyperlink structure and compute measures of page popularity (ranks) using linear algebra methods.
The idea of the PageRank algorithm
In their well-known study, Brin and Page [3] proposed an algorithm for determining the ranking of Web pages, called PageRank, which uses the notion of the "weight of a page". According to this proposal, the weight of a page depends on the number of other Web pages that point to it. The value of the weight can be used to rank the results of a query. Such a page rank, however, would offer little resistance to the phenomenon known as spam, because it is quite easy to artificially create multiple pages pointing to a given page [1]. To counteract such practices, the PageRank algorithm extends the basic idea of citations by taking into account the importance of each page that points to the analyzed page. This means that the definition of page weight (PageRank) is cyclic: the importance of a page depends on the weights of the pages pointing to it and at the same time affects the weights of the pages to which it points.
The Web model proposed in the work of Brin and Page [3] uses the link structure of the Web to construct a Markov chain with a transition matrix P, whose elements are the probabilities p_ij of the random events that the user of page i follows a link to page j. The irreducibility of the chain guarantees that the long-run stationary vector r, known as the PageRank vector, exists. Mathematically, we can think of this network as a graph, where each page is a vertex and a link from one page to another is a graph edge. In the language of PageRank, vertices are nodes (Web pages), the edges from a node are forward links, and the edges into a node are backlinks.
The PageRank model
We first present a simple definition of PageRank that captures the above
intuition before describing a practical variant.
Let the pages on the Web be denoted by 1, 2, ..., m. Let N(i) denote the number of forward (outgoing) links from page i. Let B(i) denote the set of pages that
point to page i. For now, we assume that the Web pages form a strongly connected
graph (every page can be reached from any other page). The basic PageRank of
page i, denoted by r_i, is a nonnegative real number given by
r_i = Σ_{j∈B(i)} r_j / N(j),  i = 1, 2, ..., m.   (1)
The division by N(j) captures the intuition that pages that point to page i evenly
distribute their rank boost to all of the pages they point to. According to this definition, the PageRank of a page depends not only on the number of pages pointing to
it, but also on their importance. The row vector r is called a PageRank vector and
the value r_i is the PageRank of page i.
An effective, practical way to find the PageRank vector r is to use the language and methods of linear algebra. The PageRank vector r can be found by solving either the homogeneous linear system

(A^T − I) r^T = 0^T,   (2)

or the eigenvector problem

r = r A,   (3)

where r^T is the column vector transposed to the row vector r, I is the identity matrix of order m, 0^T is the column vector of all 0's, and A^T is the transpose of the square matrix A = [a_ij]_{m×m} whose elements a_ij are defined as follows:

a_ij = 1/N(i) if page i points to page j, and a_ij = 0 otherwise.   (4)

Both formulations are subject to an additional normalization equation r · 1^T = 1, where 1^T is the column vector of all 1's.
Simple PageRank is well defined only if the link graph is strongly connected, where a graph is strongly connected when for each pair of nodes (i, j) there is a sequence of directed edges leading from i to j. One problem with solely using the Web's hyperlink structure to build the Markov matrix is apparent: some rows of the matrix may contain all zeros, and thus such a matrix is not stochastic. This occurs whenever a node contains no outlinks, and many such nodes exist on the Web. In particular, there are two related problems that arise on the real Web: rank sinks and rank leaks [1]. A group of pages pointing to each other that has some links coming into the group but no links going out forms a rank sink. An individual page that does not have any outlinks constitutes a rank leak. Although, technically, a rank leak is a special case of a rank sink, a rank leak causes a different kind of problem. In the case of a rank sink, nodes not in the sink receive a zero rank, which means we cannot distinguish the importance of such nodes.
Page et al. [12] suggest eliminating these problems in two ways. Firstly, they remove all the leak nodes with out-degree 0. Secondly, in order to solve the problem
of sinks, they introduce a decay factor α, 0 < α < 1, into the PageRank definition
(1). In this modified definition, only a fraction α of the rank of a page is distributed
among the nodes that it points to; the remaining rank is distributed equally among
all the pages on the Web. Thus, the modified PageRank is [1]:

\[ r_i = \alpha \sum_{j \in B(i)} r_j / N(j) + (1 - \alpha)/m, \qquad i = 1, 2, \ldots, m, \tag{5} \]

where m is the total number of nodes in the graph. Note that the basic PageRank (1)
is a special case of (5) that occurs when we take α = 1.
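A minimal sketch of how the modified definition (5) can be iterated directly (same invented graph as above; every page is assumed to have at least one outlink, since leak nodes are removed first):

import numpy as np

def damped_pagerank(links, alpha=0.85, iters=100):
    # Iterate rule (5): r_i = alpha * sum_{j in B(i)} r_j/N(j) + (1-alpha)/m.
    m = len(links)
    r = np.full(m, 1.0 / m)                  # start from the uniform vector
    for _ in range(iters):
        new_r = np.full(m, (1.0 - alpha) / m)
        for j, outgoing in links.items():
            for i in outgoing:
                new_r[i] += alpha * r[j] / len(outgoing)
        r = new_r
    return r

print(damped_pagerank({0: [1, 2], 1: [2], 2: [0]}))

Setting alpha=1 in this sketch reproduces the basic PageRank (1), as noted above.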
Using the matrix A defined by (4) is insufficient for the PageRank algorithm, because the iteration using A alone might not converge properly: it can cycle, or the
limit may depend on the starting vector. Part of the explanation for this is
that the matrix A is not yet necessarily stochastic [6]. For example, if a page is a leak
node, then the corresponding row of the matrix A contains all zeros.
Thus, to ensure that the matrix A is stochastic, we must ensure that every row sums
to 1. It can be proved that from the matrix A we can obtain a stochastic matrix S as
follows [6]:
\[ S = A + (b^T \cdot \mathbf{1})/m, \tag{6} \]

where b^T is the column vector whose elements are

\[ b_i = \begin{cases} 1, & \text{if } \sum_{j=1}^{m} a_{ij} = 0, \text{ i.e., page } i \text{ is a leak node}, \\ 0, & \text{otherwise}, \end{cases} \tag{7} \]

for i = 1, 2, …, m, and 1 is a row vector of all 1's.
Given any stochastic matrix S, we can obtain an irreducible matrix G as follows [6]:

\[ G = \alpha S + (1 - \alpha)\, E, \tag{8} \]

where 0 < α < 1, E = (1^T · 1)/m, and 1^T, 1 are, respectively, the column and
row vectors of all 1's.
Because G is stochastic (the entries in each row sum to 1), the dominant
eigenvalue of G is 1 [11]. Notice also that the matrix G is completely positive, i.e., all
elements of G are positive: although the probability of a transition may be very
small in some cases, it is always nonzero. The irreducibility adjustment ensures that
the matrix G is primitive, where a nonnegative, irreducible matrix is primitive if it has
only one eigenvalue on its spectral circle [10]. The primitivity of the matrix implies that
the power method will converge to the stationary PageRank vector r. It can be
shown that

\[ r = r\,G. \tag{9} \]
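The adjustments (6) and (8) translate directly into code; the following sketch (a small hypothetical matrix with a deliberately planted leak node, not data from the paper) builds the stochastic matrix S and the irreducible matrix G:

import numpy as np

alpha = 0.85
# Hypothetical A with a leak node: page 2 has no outlinks.
A = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
m = A.shape[0]

# (7): mark leak nodes, i.e., rows of A that sum to zero.
b = (A.sum(axis=1) == 0).astype(float)

# (6): replace each leak row with the uniform distribution 1/m.
S = A + np.outer(b, np.ones(m)) / m

# (8): mix with the uniform matrix E to make the chain irreducible.
E = np.ones((m, m)) / m
G = alpha * S + (1 - alpha) * E

assert np.allclose(G.sum(axis=1), 1.0)   # G is row-stochastic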
Computational aspects of PageRank
Although PageRank can be described using equation (1), the summation method
is neither the most interesting nor the most illustrative of the algorithm’s properties
[1]. The preferable method is to compute the principal eigenvector of the stochastic
and irreducible matrix G defined by (8).
One of the simplest methods for computing the principal eigenvector of a matrix
is called power iteration. In power iteration, an arbitrary initial vector is multiplied
repeatedly with the given matrix, until it converges to the principal eigenvector [6].
The idea of the power iteration algorithm for computing the PageRank vector r is
given below [1]:

(1) s ← initial vector;
(2) r ← s·G;
(3) if ‖r − s‖ < ε then stop: r is the PageRank vector;
(4) s ← r;
(5) go to step (2),

where ‖r − s‖ is the measure of the difference between successive iterates and ε is
a predetermined tolerance level (computational accuracy).
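A direct, runnable rendering of steps (1)–(5) above (a sketch; the tolerance eps and the uniform starting vector are our choices, not prescribed by the paper):

import numpy as np

def power_iteration(G, eps=1e-9, max_iters=1000):
    # Steps (1)-(5): repeat s <- sG until successive iterates agree.
    m = G.shape[0]
    s = np.full(m, 1.0 / m)                    # (1) initial vector
    for _ in range(max_iters):
        r = s @ G                              # (2) multiply by G
        if np.linalg.norm(r - s, 1) < eps:     # (3) convergence test
            return r                           #     r is the PageRank vector
        s = r                                  # (4), (5) and repeat
    return s

# e.g., r = power_iteration(G) with the matrix G built in the previous sketch.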
In order for the power iteration to be practical, it is not only necessary that it
converge to the PageRank, but that it does so in a few iterations [1]. Theoretically,
the convergence of the power iteration for a matrix depends on the eigenvalue gap,
which is defined as the difference between the moduli of the two largest eigenvalues of the given matrix. Page et al. [12] claim that this is indeed the case and that
the power iteration converges reasonably fast (in practice, in no more than about 100
iterations). It is worth noting that in practice we are more interested in the relative
ordering of the pages induced by the PageRank (since this is used to rank the pages)
than the actual PageRank values themselves [1]. Thus, we can terminate the power
iteration once the ordering of the pages becomes reasonably stable. Experiments [7]
indicate that the ordering induced by the PageRank converges much faster than the
actual PageRank.
When dealing with data sets as large as Google uses (more than eight billion Web
pages [5]), it is unrealistic to form the matrix G and find its dominant eigenvector. It is
more efficient to compute the PageRank vector r in k iterations, k = 1, 2, …, using a
power method variant that works with the matrix A, whose elements are defined by (4),
instead of the matrix G [6]:

\[ r^{(k)} = \alpha\, r^{(k-1)} A + \left[ \left( \alpha\, r^{(k-1)} b^T + (1 - \alpha) \right) / m \right] \mathbf{1}. \tag{10} \]
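A sketch of the variant (10); it touches only the sparse matrix A and the leak-node vector b, never forming G explicitly (function and parameter names are ours):

import numpy as np

def pagerank_variant(A, b, alpha=0.85, eps=1e-9, max_iters=1000):
    # Iterate formula (10) with A and b instead of the dense matrix G.
    m = A.shape[0]
    r = np.full(m, 1.0 / m)
    for _ in range(max_iters):
        r_new = alpha * (r @ A) + (alpha * (r @ b) + (1 - alpha)) / m
        if np.linalg.norm(r_new - r, 1) < eps:
            break
        r = r_new
    return r_new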
One of the benefits of using this power method variant to compute the
PageRank vector is the speed with which it converges. Specifically, the power
method on the matrix G converges at the rate at which the quantity α^k goes to zero.
This gives the ability to estimate the number of iterations required to reach a tolerance level measured by ‖r^(k) − r^(k−1)‖: the number of needed iterations k is approximately log τ / log α, where τ is the tolerance level [9].
It is worth noting that the founders of Google, Lawrence Page and Sergey Brin,
use α = 0.85 and find success with only 50 to 100 power iterations [9].
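The estimate k ≈ log τ / log α is easy to check numerically (a one-line sketch using the α quoted above and an assumed tolerance τ = 10⁻⁶):

import math

alpha, tau = 0.85, 1e-6
print(round(math.log(tau) / math.log(alpha)))   # about 85 iterations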
Analysis of the PageRank algorithm effectiveness
The practical use of the iterative algorithm described above is conditioned on its
efficiency, which in this case is measured by the number of iterations that must be
performed to reach the accuracy required for the elements of the r vector at a fixed
value of the α coefficient. The independent parameters of the simulation experiments
were the number of Web pages and the density of the links between them. In accordance
with what has been said, a network of Web pages is mapped in the form of a directed graph
without loops, where an arc represents a link from one page to another, i.e., indicates
that they are linked thematically. As a measure of the density of links between Web pages,
the λ coefficient is assumed for the simulation experiments, hereafter referred to as the
density coefficient of the adjacency matrix of the graph comprising m Web pages,
determined from the following relationship:
\[ \lambda = \frac{\sum_{i=1}^{m} N(i)}{m^2 - m}. \tag{11} \]
Experiments were performed on randomly generated adjacency matrices with
a predetermined value of the λ coefficient. Due to the limited space for presenting
the results, the experiments reported here are based on networks of 20 Web pages
(20-dimensional adjacency matrices), which does not detract from the generality of
the observations and conclusions.
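The paper does not spell out its generator, but one natural reading is to draw each of the m² − m possible off-diagonal links independently with probability λ; a sketch under that assumption:

import numpy as np

def random_adjacency(m=20, lam=0.1, seed=0):
    # Random link structure with expected density lam; no self-loops.
    rng = np.random.default_rng(seed)
    links = rng.random((m, m)) < lam
    np.fill_diagonal(links, False)      # directed graph without loops
    return links.astype(float)

L = random_adjacency()
# Empirical density, cf. formula (11): sum of N(i) over m^2 - m.
density = L.sum() / (L.shape[0] ** 2 - L.shape[0])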
The experiments conducted to evaluate the effectiveness of the iterative algorithm for
determining the r vector were aimed at:
- assessment of the number of iterations of the algorithm and the distinctness of the resulting r vector, depending on the α value at a fixed value of the λ coefficient for a Web with a fixed number of pages,
- assessment of the number of iterations of the algorithm, depending on the values of the coefficients α and λ for a Web with a fixed number of pages,
- assessment of the impact of the coefficients α and λ, for a Web with a fixed number of pages, on the number of iterations of the algorithm required to achieve the r vector of highest distinctness,
- assessment of the impact of the accuracy of determining the elements of the r vector on the number of iterations of the algorithm.
The research was conducted with the following assumptions:
- 20 Web pages were considered,
- for the considered Web, the adjacency matrix describes a graph without loops, with the density value λ = 0.1.
Fig. 1. Graphs of the PageRank coordinates of the r vector for three values of the α coefficient, equal to 0.1, 0.5 and 0.99, respectively
Source: own preparation.
Analysis of the research results confirms the supposition that the distinctness of the
assessment of Web pages by the PageRank algorithm increases with an increasing
α coefficient. The distinctness of the assessments was measured using the coefficient
of variation, well known in statistics (the ratio of the standard deviation of the
coordinates of the r vector to their mean value):

\[ V_r = \frac{s_r}{\bar{r}}. \tag{12} \]
The values of the variation coefficient of the r vector, depending on the value of the
α coefficient, are shown in Table 1.

Table 1. The values of the coefficient of variation of the PageRank r vector depending on the value of the α coefficient

α:    0.1      0.5      0.99
Vr:   0.0816   0.3768   0.7528

Source: own preparation.
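The distinctness measure (12) is straightforward to reproduce (a sketch; r would be a PageRank vector computed as in the earlier examples):

import numpy as np

def distinctness(r):
    # Coefficient of variation (12): std of the PageRank coordinates
    # divided by their mean value.
    return np.std(r) / np.mean(r)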
The desired increase of the distinctness of the r vector achieved by increasing the
value of the α coefficient results in an undesirable, exponential increase in the number of iterations of the algorithm calculating the r vector, as shown in Fig. 2.
Figure 2. Plot of the number of iterations of the PageRank algorithm in the process of determining the r vector for different α values
Source: own preparation.
For this experiment, the number L of iterations of the PageRank algorithm, depending
on the α value, can be estimated with high accuracy using the following relationship:

\[ L = 5.6815\, e^{0.107\alpha}. \tag{13} \]
Experiments were performed for adjacency matrices of fixed dimensions (20×20),
for values of the λ coefficient ranging from 0.1 to 0.9 in steps of 0.1 and for
fixed values of the α coefficient. The number of iterations needed to determine the r
vector with the assumed accuracy of its coordinates was measured. The
results are shown in Table 2.
Table 2 shows that increasing the value of the λ coefficient of the adjacency matrix (i.e., increasing the number of links between the pages) reduces the number of
iterations the PageRank algorithm needs to determine the r vector with the desired accuracy for
a fixed α coefficient. As a function of α, the number of iterations of the algorithm grows exponentially
for a sparse adjacency matrix (λ = 0.1), changes approximately linearly for the adjacency matrix
with λ = 0.5, and follows a parabola with a negative leading coefficient for a dense matrix,
i.e., for λ = 0.9. However, real Web networks appear to be rather sparse,
characterized by values of the coefficient λ < 0.5; in such cases an exponential
increase in the number of iterations of the PageRank algorithm with increasing
α values should therefore be expected.
Table 2. The number of iterations of the PageRank algorithm as a function of the α and λ coefficients

α \ λ |  0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
0.1   |    6     5     5     5     4     4     4     4     3
0.2   |    8     6     6     5     5     5     5     4     4
0.3   |   10     8     7     6     6     6     5     5     4
0.4   |   12     9     8     7     7     7     6     5     5
0.5   |   14    11     9     8     7     7     6     6     5
0.6   |   18    12    10     9     8     8     7     6     5
0.7   |   21    15    11    10     9     9     7     7     6
0.8   |   26    17    12    11     9     9     8     7     6
0.9   |   34    20    14    12    10    10     8     7     6
0.99  |   44    24    15    12    11    11     9     8     6

Source: own preparation.
The impact of both the α and λ coefficients on the speed of obtaining the r vector
of the highest distinctness was evaluated indirectly, through an analysis of the
distances between the r vectors obtained for different values of the α coefficient and
the vector characterized by the greatest distinctness, i.e., the vector obtained for
α = 0.99. Among the known distance measures between numerical vectors, seven of
the most frequently used in practice were selected: Euclidean, Chebyshev, Manhattan,
Pearson, tangent, angular and exponential modulus. The research was conducted for
adjacency matrices of fixed dimensions (20×20) and selected values of the λ coefficient.
Fig. 3 shows the changes in the Euclidean distance between the r vectors and the
vector with the highest distinctness (for α = 0.99) as a function of the α coefficient,
for adjacency matrices with λ coefficient equal to 0.1, 0.5 and 0.9.
Figure 3. Changes of the Euclidean distance of the r vectors to the vector with the greatest distinctness (for α = 0.99) as a function of the α coefficient, for adjacency matrices with λ coefficient equal to 0.1, 0.5 and 0.9 (axes: values of the α coefficient vs. Euclidean distance from the reference vector; curves for λ = 0.1, 0.5 and 0.9)
Source: own preparation.
Waveforms similar to those shown in Fig. 3 were also observed when the distance between the r vectors was measured using the other distance measures. This
justifies the hypothesis that for a sparse adjacency matrix (λ = 0.1) the convergence
of the r vectors (decreasing distances), calculated for increasing values of the α coefficient, towards the reference vector is much faster than for denser adjacency
matrices. Based on the results of the experiment it can be concluded that for a dense
adjacency matrix (λ = 0.9), the r vector obtained for small values of α, using a small
number of iterations of the investigated algorithm, will be a good approximation
of the highly distinct r vector that is otherwise obtained, at the cost of a larger number of
iterations, for a high value of the α coefficient. This conclusion might have important
practical significance if the examined page-ranking algorithm were used in large
networks with highly dynamic changes in the density of the links between
the Web pages.
Conclusions
Many of today’s search engines use a two-step process to retrieve pages related
to a user’s query. In the first step, traditional text processing is done to find all documents using the query terms, or related to the query terms by semantic meaning.
This can be done by a lookup into an inverted file, with a vector space method, or
with a query expander that uses a thesaurus. With the massive size of the Web, this
first step can result in thousands of retrieved pages related to the query. To make
this list manageable for a user, many search engines sort this list by a ranking criterion. One popular way to create this ranking is to exploit the additional information
inherent in the Web due to its hyperlinking structure. Thus, link analysis has become the means to ranking. One successful and well-publicized link-based ranking
system is PageRank, the ranking system used by the Google search engine [2].
From the foregoing considerations it follows that time savings in ranking Web pages
can be achieved in practice by substituting the result (the page ranking) obtained
through the full PageRank algorithm with an approximate ranking of these pages,
based on the analysis of their in-degrees, i.e., the numbers of references from other pages.
References
[1] Arasu A., Cho J., Garcia-Molina H., Paepcke A., Raghavan S. (2001), Searching
the Web. ACM Transactions on Internet Technology, 1(1): 2–43.
[2] Blachman N., Fredricksen E., Schneider F. (2003), How to do Everything with
Google. McGraw-Hill.
[3] Brin S., Page L. (1998). The anatomy of a large-scale hypertextual Web search engine.
Comput. Netw. ISDN Syst. 30: 107–117.
[4] Cho J., Garcia-Molina H. (2003), Estimating frequency of change. ACM
Transactions on Internet Technology, 3(3): 256–290.
[5] Coughran B. (2005), Google’s index nearly doubles, Google Inc.,
http://googleblog.blogspot.com/2004/11/googles-index-nearly-doubles.html.
[6] Golub G., Van Loan C.F. (1989), Matrix Computations. 2nd ed. Johns Hopkins
University Press, Baltimore.
[7] Haveliwala T. (1999), Efficient computation of PageRank. Tech. Rep. 1999-31,
Computer Systems Laboratory, Stanford University, Stanford, CA.
http://dbpubs.stanford.edu/pub/1999-31.
[8] Kleinberg J.M. (1999), Authoritative Sources in a Hyperlinked Environment. Journal
of ACM, 46(5): 604–632.
[9] Langville A.N., Meyer C.D. (2004), The Use of the Linear Algebra by Web Search
Engines, http://meyer.math.ncsu.edu/Meyer/PS_Files/IMAGE.pdf
[10] Lawrence S., Giles C. (1999), Accessibility of information on the web. Nature 400,
107–109.
[11] Meyer C. D. (2000), Matrix Analysis and Applied Linear Algebra. The Society for
Industrial and Applied Mathematics, Philadelphia: 490–693.
[12] Page L., Brin S., Motwani R., Winograd T. (1998), The PageRank Citation Ranking: Bringing Order to the Web. Tech. Rep. Computer Systems Laboratory, Stanford University, Stanford, CA.
Summary
Key words: search engine, crawling, PageRank algorithm
In this paper the challenges in building good search engines are discussed. Many search
engines use well-known information retrieval algorithms and techniques, and use Web crawlers to
maintain their index databases, amortizing the cost of crawling and indexing over the millions of
queries they receive. Web crawlers are programs that exploit the graph structure of the Web to
move from page to page. The paper analyses the PageRank algorithm, which ranks the pages gathered
by such crawlers. The results of the impact of the PageRank parameter value on the effectiveness of
determining the so-called PageRank vector are considered in the paper. The investigations are
illustrated by the results of some simulation experiments analyzing the efficiency of the PageRank
algorithm for different values of the density coefficient of the graph representing the analyzed
part of the WWW.
Analiza algorytmu PageRank

Streszczenie (Summary)

Key words: Internet search engine, crawling algorithms, PageRank algorithm

The article presents an analysis of the operation of the PageRank algorithm, which defines the
best-known method of indexing Web pages on the basis of links. The PageRank algorithm is
a development of a long-known heuristic according to which the quality of a document is proportional to the number of documents citing it. Unlike previously proposed solutions, the PageRank
algorithm analyses the structure of connections, ordering Web pages independently of their content.

The paper analyses the impact of the PageRank algorithm parameter on the effectiveness of
determining the weights of individual Web pages. The analysis of the effectiveness of the PageRank
algorithm is illustrated with the results of some simulation experiments conducted for different
network densities.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Sergey F. Robotko
Vinnytsia Cooperative Institute, Ukraine
The generalized criterion of the assessment
of efficiency and optimization
of stocks in logistic systems
Purpose of the work. In the last decade industrial production has become considerably more complicated, clients' requirements concerning product quality and the level of service
have increased, and the time to bring new products to market has been reduced. All
this has demanded changes in the methodology and technology of management. Accordingly,
it became necessary, on the one hand, to systematize approaches to production management and, on the other, to accelerate the solution of the tasks facing the
enterprise. Their complexity dictated the need to relieve people of routine computational
functions, allowing them to concentrate on the process of making managerial decisions.
As a result, practically every enterprise uses, to some extent, production
management systems and the processes accompanying them. Management of production and
trade stocks, which ensures the rhythm, continuity and reliability of production, is
one of the most important processes inherent in any production,
organizational and economic system. Accordingly, in advanced enterprises the management
of supply and production processes is nowadays carried out on the basis of systems
implementing the MRP I (Material Requirements Planning), MRP II (Manufacturing
Resource Planning) and ERP (Enterprise Resource Planning) standards [1, 2]. At the
same time, the cost of these systems is so considerable that it is simply impossible to
ignore. For example, as N. Gaither [1] notes, the Chevron Company spent about
$160 million to buy and introduce an ERP system.
The use of such systems rigidly regulates the procedures of material flow management
in the preparatory and production cycles of enterprises and makes strict
demands on the timeliness and accuracy of logistics processes. In such conditions,
the optimum organization of stock management significantly reduces the operational costs of production and the product cost, and ensures the stability of production as a whole.
Therefore, problems of modeling stock management processes receive considerable
attention in the theoretical and practical work of researchers and production
practitioners [3, 4]. When modeling the dynamics of stocks in time, the criterion of
management efficiency usually adopted is the average value of the stock management
expenses over a certain period of time, which takes into account different types of costs
of organizing supply.
Such a criterion does not consider any parameters of the process except cost and
does not give a complete picture of its efficiency.

This especially concerns stock management in multi-item mass production, when
material flows have high intensity and consist of a significant number of items of raw
materials, materials, semi-finished products and finished goods. In such conditions,
the price of mistakes and inaccuracies in demand measurement (external and internal)
and in the control of stock levels in a logistic production system, as well as of human
errors, which in a real production situation are of a random character, increases
significantly. In such a situation, the criterion of stock management efficiency also has
to take into account the information component of the stock control system and the
corresponding processes.

Therefore, this work sets the task of creating a criterion of the efficiency of the
stock management process which, in a complex form, could generalize not only the
cost but also the information characteristics of the process and would allow the
efficiency of the process to be estimated as the degree of its approach to a certain
reference process.
Results of the research. The choice of criteria is always one of the most important stages in the solution of any task, including the creation of an automated control and management system for stocks in logistic systems. In the modern practice of stock management theory, the choice of optimization criteria is guided by the following requirements, according to which the criterion has to [3–5]:
- really measure the system effectiveness;
- have functional completeness and generality;
- be expressible in a quantitative form;
- be sensitive to the key changing parameters of the system;
- provide comparison of different possible variants of system design.
It is obvious that the optimization criterion of a control system has to reflect
the purpose of the system's functioning, i.e., streamlining the process of creating and
managing production stocks; this purpose has to be formulated in the
form of a function or functional of the system parameters.
The following are usually used as indicators of the effectiveness of a stock management system [4]:
- the level of service, defined as the ratio of the number of delivered stock units to the required quantity;
- the probability of stock sufficiency, defined as the stationary probability of satisfying demand at any point in time;
- economic indicators.
However, despite this declared wide range of indicators, when models of
stock management are created everything is reduced to the situation in which the
optimization of the stock management process is expected to lower the expenses
accompanying the production process. Therefore, in the known models of stock management,
the cumulative costs of creating, replenishing and storing stocks, together with the losses
caused by a surplus or deficiency of stocks, are chosen as the optimization criterion of
the system [3–5]. In general, this criterion can be presented in the form of the functional

\[ L = F(C_c, C_n, C_u, C_s, C_l, C_o, C_p, C_f, T, t, y), \tag{1} \]
where C_c is the warehouse cost, covering the cost of equipment and the expenses of
maintaining the service personnel; C_n is the amount of losses from the natural attrition
of stocks during storage; C_u is the cost of a unit of stock; C_s is the cost of storing a
unit of stock; C_l is the cost of organizing the delivery of an order batch, which, as a
rule, consists of the expenses of preparing the documentation and submitting the order;
C_o is the loss from a surplus of stocks in case it is impossible to sell them; C_p is the
loss from a deficiency of stocks; C_f is the loss from "freezing" the current assets tied
up in stocks; T is the planning period; t is the current time; and y is the volume of
created stocks.
As a rule, it is not possible to present the functional (1) with all the listed components
in a form convenient for calculations. Therefore some components of the criterion are
united into one indicator. Moreover, depending on the specific objective considered and
the mathematical model corresponding to it, individual indicators in (1) can be excluded
from consideration.

In the theory of stock management, problems of two types are considered.
Sometimes one is explicitly limited to some natural class of management policies
depending on a small number of parameters. The optimization task is then reduced to
constructing algorithms for determining the optimum values of these parameters.

With such an approach, however, doubts arise as to whether there exists a more
complicated ordering rule, lying outside the analyzed class, that provides smaller
expenses. In certain cases it is possible to resolve these doubts and to identify
conditions under which simple management policies are optimal in the global
sense, meaning that more complicated ordering rules give no
advantage over the best policy from the class found.

Thus, models of stock theory allow the influence of the control parameters on the
dynamics of the stock level to be analyzed and the expenses accompanying stock
management to be minimized by choosing optimal management policies.
To use the results of this theory it is necessary:
- to have a mathematical description of the demand for, or consumption of, the products stocked in the course of production;
- to specify the rules of order fulfilment.
If these external conditions are set, it is possible to analyze the operation of stock
control systems, that is, to find the distribution of the stock level, the frequency of orders,
the frequency and duration of interruptions in supply, and so on. The ability to calculate
such characteristics helps in choosing an acceptable management policy and
the values of the operating parameters.
At the same time, the construction of criteria in the existing models suffers from
some very important shortcomings connected with the failure to account for the
following factors:
- the process of change of the stock level is stochastic, because it is influenced by many random factors; therefore the state of the controlled object at any point in time has some uncertainty, which "is removed" in the course of control;
- the characteristics of demand can be of any character, in particular non-stationary, which compels one to solve the tasks of analyzing and forecasting demand for each item;
- when managing a large number of stock items, it is necessary to develop rules and algorithms for stock control, to choose its optimal moments, etc.
All these processes are characterized by certain stochastic functions and have their own
distribution laws; consequently, they carry uncertainty, which characterizes the entropy
of stock management in a concrete logistic system.
Having considered the existing approaches to the analysis of stock management,
it can be concluded that when creating models of these processes,
researchers limit themselves to searching for the value of the cumulative stock management expenses in the form of the functional (1). At the same time, questions of
assessing other indicators of stock management remain out of sight, above all
those connected with the technical realization of the processes of control and management of supply (management of material flows) in logistic systems. It
should be noted that a minimum of the functional (1) can be provided by systems
differing in structure and technical characteristics. It is obvious that a stock control
system will be "best" (ideal) when it provides a minimum of the
total stock management expenses while having minimum cost itself and
providing optimal algorithms of stock control and management. Clearly, a real
system can only approach the ideal, though in principle arbitrarily closely.
Moreover, the process of change of the stock level is stochastic, as it is influenced
by many random factors; therefore the state of the controlled object at any point in
time has some uncertainty, which "is removed" in the course of control.

Summing up the above, it can be concluded that the existing indicators of stock
management efficiency characterize only the economic component of the process and
do not consider its information component. Therefore, the question of a proper choice
of the criterion of efficiency and optimization of the stock management process in
logistic systems requires further resolution.
Matching the criterion to the purpose of the operation is the most important
principle that must always be adhered to when choosing a criterion of operational
efficiency.

In the practice of operations research, the criterion of the greatest average result
has gained the widest distribution. This is due to the additivity of the average-result
indicator, which in certain cases considerably facilitates its calculation. However, the
average-result indicator underlying this criterion does not explicitly consider the
required result. Moreover, orientation towards the average result is justified for mass
repetition of an operation; for a single execution of an operation (unique operations,
systems) it is inexpedient to use the criterion of the greatest average result.
Considering that stock management in logistic systems is characterized by
mass repeatability of events (operations), it is expedient to use the criterion of the
maximum average result for optimizing such systems. Moreover, stock management,
as noted earlier, proceeds under the influence of many random factors,
which leads to a probabilistic distribution of the system state. Therefore the
criterion of effective stock management has to include a probabilistic component
characterizing information on the state of the stock control system.
Best suited for this purpose are the generalized functional-statistical criterion
(GFSC) of I.V. Kuzmin [6] and the partial criteria derived from it.
When synthesizing a criterion for assessing the efficiency of the stock control and
management process, it is first of all necessary that it really characterize the efficiency
of stock management. The criterion meets this requirement if it characterizes the
information capacity of the control and management system.

The amount of information received by the system during control and management
over a time interval τ equals

\[ I_p(t, \tau) = H_0(t, \tau) - H(t, \tau), \tag{2} \]

where H_0(t, τ) is the entropy of the object and system at the beginning of the process of
control and management, and H(t, τ) is the residual entropy of the object and of the
automated control and management system for stocks (ACMS) after the control and
management process.
Dependence (2) characterizes a real process in a real system, which works in real
time, in real working conditions, under the influence of random factors, and is
characterized by some uncertainty; thus (2) allows the information capacity of the real
system to be calculated.

A hypothetical potential (ideal) system removes all the uncertainty of the process.
Therefore its efficiency from the information point of view can be estimated by the
indicator

\[ I_{\text{п}}(t, \tau) = H_0(t, \tau). \tag{3} \]

The efficiency of the ACMS from the information point of view can be estimated by
the ratio of the information capacity of the real system to the information capacity of
the potential system. Taking into account expressions (2) and (3):

\[ E_I = \frac{H_0(t, \tau) - H(t, \tau)}{H_0(t, \tau)}. \tag{4} \]
In fact, criterion (4) shows the degree to which the real system approaches the
potential one: the more effectively the control and management system works, the
smaller the residual entropy H(t, τ) and the closer E_I is to 1.

The convenience of the criterion E_I in the form (4) consists in its definiteness with
respect to the extreme states of the system: for a completely certain (potential) system
(process) E_I = 1, for a completely uncertain system (process) E_I = 0, and for any real
system 0 ≤ E_I ≤ 1.

Along with this advantage, the criterion E_I also has essential shortcomings:
- it is a static assessment of efficiency, which does not consider the dynamics of the control and management process;
- it does not consider the complexity and cost of the control and management processes in the ACMS.
To eliminate these shortcomings it is necessary, as proposed in [6], to estimate
the cost of the information capacity of the potential and real ACMS by relating them to
the average expenses of organizing the control and management processes in the
potential, C_п(t, τ), and real, C_p(t, τ), systems.

Then for the real control and management system we have the assessment

\[ K_{Ip} = \frac{I_p(t, \tau)}{C_p(t, \tau)}; \tag{5} \]

and for the potential system

\[ K_{I\text{п}} = \frac{I_{\text{п}}(t, \tau)}{C_{\text{п}}(t, \tau)}. \tag{6} \]

Comparing the real ACMS with the potential one (normalizing the first by the second), we have [6]:

\[ E(t, \tau) = \frac{K_{Ip}}{K_{I\text{п}}}. \tag{7} \]
Taking into account (5) and (6), criterion (7) becomes

\[ E(t, \tau) = \frac{I_p(t, \tau)\, C_{\text{п}}(t, \tau)}{I_{\text{п}}(t, \tau)\, C_p(t, \tau)}. \tag{8} \]

Finally, taking into account expressions (2) and (3), the generalized functional-statistical criterion of the control and management processes (8) becomes

\[ E(t, \tau) = \frac{\{H_0(t, \tau) - H(t, \tau)\}\, C_{\text{п}}(t, \tau)}{H_0(t, \tau)\, C_p(t, \tau)}. \tag{9} \]
The advantages of the generalized functional-statistical criterion (9) are its
completeness, clarity and acceptable simplicity. But the most important property of
criterion (9) is that it closely and logically connects the information capacity of a
real control and management system with the cost of organizing this process, and that
this generalized characteristic is compared (normalized) with the ideal, potential case.
In essence, criterion (9) characterizes the limiting informational and cost efficiency of
the operating automated control and management system for stocks.

To create a modified GFSC it is necessary to interpret each component of formula
(9) from the point of view of stock management, to define the concept of the potential
system, and to define a procedure for calculating the criterion.
Let us rewrite the GFSC formula in the form

\[ E(t, \tau) = \frac{\{H_0(t, \tau) - H(t, \tau)\}\, C_{\text{п}}(t, \tau)}{H_0(t, \tau)\, C_p(t, \tau)} = K_i \cdot K_c, \tag{10} \]

where the coefficient K_i is the information component of the criterion and K_c plays
the role of the cost component. Clearly,

\[ K_i = \frac{H_0(t, \tau) - H(t, \tau)}{H_0(t, \tau)}, \tag{11} \]

and

\[ K_c = \frac{C_{\text{п}}(t, \tau)}{C_p(t, \tau)}. \tag{12} \]
From (10)–(12) it is clear that K_i ≤ 1 and K_c ≤ 1. Therefore their product, the value
of the criterion, lies in the interval (0, 1). The closer the value of the criterion is to 0,
the worse the system works; the closer it is to 1, the better the system is and the
closer it comes to the potential, perfect state of the system.

Unlike a technical system, a logistic system, as an integrated organizational and
economic system, works even when the processes in it are organized unsuccessfully;
the negative consequences of this accumulate, hidden from the user, and may manifest
themselves when it is already impossible to correct them. Therefore the components of
criterion (10) have to consider the state of the processes in the system in their
probabilistic manifestation, compared with the ideal case, in which the system has no
uncertainty at all and is completely controllable.
From this point of view, the choice of the potential system is a very important stage
in forming the GFSC. Let us consider this question for a stock control system [7].
Considering the constituent elements of the stock management models (1), we can
conclude that the potential (ideal) stock control system has to be completely
predictable, deficiency of stocks in it has to be completely excluded, the demand for
the stocked items in such a system has to be deterministic, and the operating conditions
of the potential system have to be invariable.

The only such system is one which realizes the policy of a constant delivery batch
size and is described by Wilson's model [3, 5]. According to this strategy, orders of
constant volume

\[ Q = \sqrt{\frac{2\, C_l N}{\Theta\, C_s}} \]

are placed at constant intervals of time, the delivery period being

\[ T = \sqrt{\frac{2\, C_l \Theta}{N\, C_s}}. \]
Then, as shown in [3–5], the cumulative stock management expenses for the whole
planned period amount to

\[ C_{total} = \sqrt{2 N \Theta\, C_l C_s}, \tag{13} \]

where N is the cumulative demand for the planned period Θ; C_l is the expense of
organizing the delivery of a batch of stock; C_s is the cost of storing a unit of stock per
unit of time; and C_total is the cumulative stock management expense for the planned
period of time Θ.

It can thus be concluded that the first component of the GFSC, the cost of
organizing the process in the potential system, C_п(t, τ), is determined by
formula (13).
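A minimal numeric sketch of Wilson's model as used here (the demand N, period Θ and the two costs below are invented for illustration only):

import math

N, theta = 1200.0, 30.0      # hypothetical demand over a 30-day period
C_l, C_s = 50.0, 0.2         # ordering cost and unit storage cost per day

Q = math.sqrt(2 * C_l * N / (theta * C_s))       # constant batch size
T = math.sqrt(2 * C_l * theta / (N * C_s))       # constant delivery period
C_total = math.sqrt(2 * N * theta * C_l * C_s)   # formula (13)
print(Q, T, C_total)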
The stock management expenses in the real system, C_p(t, τ), are determined by the
real operating conditions of the system, differ from C_п(t, τ), and have to be calculated
by modeling the processes of functioning of the real system.

To calculate the information component of the GFSC it is necessary to consider
again the potential stock control system as the system described by Wilson's model.
From [6] it is known that the maximum entropy of a system is reached when all
states of the system have identical probability. The state of a stock control system is
conventionally understood as the number of stock units present in the system [3, 5].
From the dynamics of the potential stock control system it is evident that this
probability equals

\[ p_{i,\text{пот}}(t, \tau) = \frac{1}{Q}, \qquad i = 1, 2, \ldots, Q. \tag{14} \]

The entropy of the potential system (14) is then defined by the expression

\[ H_0(t, \tau) = -\sum_{i=1}^{Q} \frac{1}{Q} \log_2 \frac{1}{Q} = \log_2 Q. \tag{15} \]

The entropy of the real system, H(t, τ), is defined by the probabilities of the states
of the real system,

\[ p_i(t, \tau) = P(y = i), \]
that is, by the stationary probabilities that at any point in time the stock level in the
stock control system equals exactly i units.
Then, taking into account formulas (13)–(15), the generalized functional-statistical
criterion (10) for a stock control system becomes

\[ E(t, \tau) = \frac{\left\{ -\log_2 \frac{1}{Q} + \sum_{i=1}^{y} p_i \log_2 p_i \right\} \sqrt{2 N \Theta\, C_l C_s}}{-\log_2 \frac{1}{Q} \cdot C_p(t, \tau)}. \tag{16} \]
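A sketch of how criterion (16) could be evaluated (our illustration: the state probabilities p and the real-system cost are assumed to come from a simulation of the real stock process):

import math

def gfsc(p, Q, N, theta, C_l, C_s, C_p_real):
    # Generalized functional-statistical criterion (16).
    # p: probabilities of the stock-level states of the real system;
    # Q: batch size of the potential (Wilson) system;
    # C_p_real: stock management expenses of the real system.
    H0 = math.log2(Q)                                   # (15)
    H = -sum(pi * math.log2(pi) for pi in p if pi > 0)  # residual entropy
    C_pot = math.sqrt(2 * N * theta * C_l * C_s)        # (13)
    K_i = (H0 - H) / H0                                 # (11)
    K_c = C_pot / C_p_real                              # (12)
    return K_i * K_c                                    # (10), i.e., (16)

# e.g., gfsc(p=[0.05]*20, Q=20, N=1200, theta=30, C_l=50, C_s=0.2,
#            C_p_real=900.0) gives 0, since a uniform p means maximum
#            residual entropy (a completely uncertain real system).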
The advantages of the generalized functional-statistical criterion of effective stock
management in logistic systems in the form (16) are the following:
- the criterion is normalized: its calculated value lies in the range from 0 to 1, and the numerical value of the criterion shows the degree to which the real system approaches the elementary, completely computable case;
- all components of the criterion characterizing the real stock management process in a real situation can be determined as a result of mathematical modeling of the real process of change of the stock level in time;
- the values of criterion (16) calculated for different stock management strategies at the design stage of a control system allow the stock management policy to be chosen for each real practical situation;
- each component of criterion (16) can be used separately, as a partial criterion of system effectiveness.
Thus, combining the informational and cost efficiency in the form of the generalized
functional-statistical criterion of stock management efficiency allows a stock control
system in logistic systems to be considered, analyzed and optimized as a whole.
Conclusions. The use of the principles of system analysis in studying the functioning
of a material supply system leads to the conclusion that the choice of an efficiency
criterion for such a system is one of the main problems of the design, optimization and
realization of automated monitoring and stock management systems in logistic systems
of any nature.

On the basis of the analysis of stock management processes, a form of the
generalized functional-statistical criterion of such processes has been derived. It has
been shown that the basic stock management model described by Wilson's model
should be adopted as the potential system. Taking this into account, a computational
formula of the criterion has been obtained which can be calculated for any real
situation in a logistic system.
Literature
[1] Gaither N., Fraizer G.V. Production and operations management/8th edition. – Cincinnati: S.-W. College Publishing, 1999.
[2] Greene J.H. Production and inventory control handbook. Falls Church. – VA: American Production and Inventory Control Society, 1997.
[3] Хедли Дж., Уайтин Т. Анализ систем управления запасами/ Пер. с англ. ред.
ред. А.Л. Райкина. – М.: Наука, 1969.
[4] Шрайбфедер Дж. Эффективное управление запасами / Пер. с англ. 2-е изд. —
М.: Альпина Бизнес Букс, 2006.
[5] Рыжиков Ю.И. Теория очередей и управления запасами. – Спб.: Питер, 2001.
[6] Кузьмин И. В. Оценка эффективности и оптимизация АСКУ. - М.: Советское
радио, 1971.
[7] Кузьмин И.В., Роботько С.Ф. Анализ и построение критериев оценки
эффективности и оптимизации процессов управления запасами в логистических
системах // Проблеми інформаційних технологій. – 2011. - № 10.
Summary
Key words: logistic system, stock management, efficiency criterion, control, potential system,
stock management expenses, stock control system

The article considers the problem of creating a criterion for assessing the efficiency of automated
monitoring and stock management systems in logistic systems. It is shown that the traditional
estimation of stock management efficiency by means of the average stock management expenses over
time is an imperfect assessment, because with such an approach the information component of the
process is not considered. This component reflects the essential uncertainty of the management
processes and of the control of the stock state, which is introduced into the system by the random
processes proceeding in it, by errors in the control of the stock level, and by human mistakes.
A model of the generalized functional-statistical criterion of stock management processes is
proposed, which takes into account the actual processes of stock changes in time and the information
capacity of the stock management system.
Uogólnienie kryterium oceny efektywności i optymalizacji zasobów w systemach logistycznych

Streszczenie (Summary)

Key words: logistic systems, stock management, efficiency criterion, control, potential of the stock
management system, costs, stock control system

The article considers the problem of creating a criterion for assessing the effectiveness of stock
control and management systems in logistic systems. It shows that the traditional assessment of the
efficiency of stock management over a period of time is imperfect. A model of a generalized
functional-statistical criterion of stock management processes is proposed, which takes into account
the real processes of changes in time and the information capacity of the stock management system.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Tomasz Wojciechowski
Military University of Technology
Faculty of Geodesy and Cartography
Control measurements of bridge structures
Introduction
The structure of bridges constructed today, faced with continuous
vehicle, rail and foot traffic, is constantly optimised with regard to cost and durability.
Modern construction solutions and assembly methods make it possible to erect bridges
whose length is limited only by human imagination and economics; nowadays the
character of the obstacle plays a secondary role.

Testing of all structures of this type is of special importance: we all cross
rivers or other obstacles, natural or artificial, almost every day. We rarely, if
ever, stop to think about the condition of the footbridge we cross, or what happens
to it during our passage. To allow us the peace of mind of not having to
think about these issues, special services are in place that control the technical
condition of bridge structures and all types of displacements by carrying out
post-construction evaluation tests. These tests require not only
constructors specialised in bridges, but also specialists from many other areas of
science, such as physicists, geotechnical engineers, hydrologists, etc. Geodesists also play a
very important role in this process. Geodetic measurements allow the vertical
and horizontal displacements of structures to be determined, and therefore the shape of
the alignment to be controlled.
The Świętokrzyski Bridge in Warsaw is a cable-stayed bridge. The theoretical idea
of constructing cable-stayed bridges appeared in the first half of the 17th century;
the first such bridge was constructed 150 years later. A real evolution of these
structures did not begin until after the Second World War (in 1956) and continues
to this day.

In cable-stayed bridges, the spans are suspended from a system of straight cables,
which transfer the load to the pylon.
The basic elements composing the structure of a cable-stayed bridge are:
- supports,
- pylons,
- gantry girder,
- cable staying system.
Fig. 1. Construction elements of a cable-stayed bridge (cable staying system, gantry girder, supports)
Source: Biliszczuk J., Mosty Podwieszane - Projektowanie i realizacja, Wydawnictwo Arkady, Warszawa 2005.
The methods of measuring the displacement and deflection of bridge structures
depend principally on the accessibility of the measured object. In order to determine
vertical displacements, including the deflection of spans, the method used most
often is levelling.

The accuracy of the measurement depends mainly on the range of deformation of
the tested object, as this determines the spacing between the instrument stations and
the measured points located on the object. It can generally be said that
the accuracy of determining vertical displacements varies from 0.1 mm to 1.00 mm when
high-precision geometric levelling is applied.

When measuring horizontal displacements, the accuracy of determining the
displacement components is ca. 0.2 mm for a target at a distance of 50 m; for larger
spacing of the measured points, the accuracy of determining horizontal displacements
is lower.
Evaluation of the construction of cable-stayed bridges is a lengthy and complicated
process; it involves a large number of measurements not required for other
structures of this type. In the case of cable-stayed bridges, we can distinguish at
least four types of testing and control measurements carried out during the
construction phase as well as during operation:
- monitoring during the construction phase and commissioning tests,
- testing of behaviour caused by environmental factors in relation to potential limitations of operation or threats to the safety of the structure,
- cognitive measurements – verification of the assumptions and theories applied during the design stage,
- periodic maintenance checks – control of the technical condition and forecasting of repairs.
Cable-stayed bridges are frequently subject to testing during operation; the aim of
these tests is to establish the algorithm according to which the bridge structure works.
The final aim of post-construction tests is to draw up coherent guidelines and standards
for shaping, designing and controlling cable-stayed structures.

In places where the climate is mild, where there is no risk of earthquakes and
climatic threats are inconsiderable, the most important measurements are those
related to the evaluation of the technical condition, carried out in order to:
- control the degradation process,
- locate the causes of any disturbances of operation,
- control functionality and safety conditions.
Geodetic measurements of the Świętokrzyski Bridge
In order to determine the displacements of the tested points situated on the Świętokrzyski
Bridge, eight benchmarks were used as reference points. The testing was carried out
in three measurement series, all of which were carried out on Saturdays and Sundays,
when traffic is not as heavy as on business days.

As the measured points we adopted benchmarks located in the line of the additional
under-rail beams, numbered from 100 to 153 and 400 to 452, as well as benchmarks
mounted in the centres of the anchor blocks, numbered from 200 to 223 and 300 to 323.

All reference benchmarks as well as the controlled benchmarks have been
permanently stabilised. Benchmarks numbered 111.0310 = 998, 111.0510 = 997, 999
and 996 are located on the facades of residential apartment buildings; benchmarks
numbered 1000, 1001, 4000 and 4001 are located in the bridgeheads (barrier locking
screws). All the controlled benchmarks were installed during the bridge construction
process and have been stabilised permanently (steel bolts set into the bridge structure).
Measurement method
The measurements were carried out using the precise levelling method, with
observations made with a precise digital level Leica NA3003 and a set of bar-coded
precision rods. In each cycle the levelling was carried out twice, in the main and
reverse directions, and the measurement results were recorded automatically. Since
there are no accuracy requirements prescribed specifically for bridge structure
measurements, the accuracy of the high-precision levelling method has been assumed.
Course of testing
In order to determine the vertical displacements of the Świętokrzyski Bridge in Warsaw,
we used a network of benchmarks located on the deck of the bridge. These benchmarks,
mounted during the construction stage by the contractors, are used for periodic testing
of the bridge structure. The number and location of the benchmarks depend on the
length of the spans and the individual recommendations of the constructors.
The Świętokrzyski Bridge has:
- 153 benchmarks located on the southern side along the rail of the bridge,
- 152 benchmarks located on the northern side along the rail of the bridge, and
- 23 benchmarks located in the cable anchoring blocks on the southern side,
- 23 benchmarks located in the cable anchoring blocks on the northern side.
Fig. 2. A controlled benchmark in the centre of an anchor block
Measurement of the vertical displacements of the Świętokrzyski Bridge in Warsaw was
carried out in three measurement cycles.

Cycle I, treated as the reference for the following cycles, was acknowledged to
be the initial measurement. Measurements for this cycle were started in November.
In order to reduce the effect of structural vibrations induced by vehicle traffic
and to avoid air turbulence caused by sunlight, measurements were started in the
early hours of the day. Although a non-business day was chosen for the measurements,
it was impossible to avoid vehicle traffic on the bridge that caused vibrations
(especially public transport vehicles). During the passage of buses, measurements
had to be stopped and continued after the structure had "calmed down", or repeated
numerous times. This constituted the biggest problem during the measurements and
was the reason why measurements were spread over two days in each cycle;
even the initial measurement was carried out on the weekend following the
levelling of the controlled benchmarks. The days when observations were carried out on
the bridge were selected according to weather conditions that enabled trouble-free
work. Air temperature was measured during the tests; on the two subsequent
days of measurements we noted: November 2006: 8–12 degrees Celsius, August
2007: 17–22 degrees Celsius with a light wind, December 2008: ca. 8 degrees Celsius.
An additional referencing to the benchmarks was carried out in December at 9 degrees
Celsius.
Measurements were started from the benchmarks located along the rails on the
southern side (from benchmark no. 1000 to benchmark no. 1001), placing the
apparatus next to every other benchmark. In this way equal "rod – apparatus"
distances were maintained. Measurements were carried out using the method of
levelling from the centre, taking two readings on rods located in the backward
and forward directions and levelling the entire distance twice. For the measurement
a set of high-precision bar-coded invar rods was used, stabilised with
tripods for the time of measurement.

The same operations were carried out for the subsequent lines of benchmarks,
creating a total of four levelling sequences (benchmarks numbered, respectively:
200–223, 300–323, 4000–4001). Additionally, to ensure better control of the
measurements and to create a larger number of redundant observations for the
adjustment, the individual levelling lines have been connected. Benchmarks of the
following numbers have been connected: 1000–4000, 116–200–300–416, 132–211–311–432,
151–223–323–450, 1001–4001.
The referencing was carried out on the basis of eight benchmarks located outside
the zone of the object's influence. For this purpose, two benchmarks of the
national geodetic network and six benchmarks mounted especially for the needs of
the control measurements were used. Below we present the shape of the reference on the
northern side (the Praga district) and the southern side (the central Śródmieście
district) of the bridge. On the northern side the reference was made on the
basis of points 111.0510, 4001 and 1001, and on the southern side on the basis of
points 111.0310, 4000, 1000, 999 and 996. Theoretically, the applied reference system
(base) should be additionally controlled against other fixed benchmarks located
further away from the object, so that the displacement of the bridge does not
influence their stability. However, due to the urban conditions in the case of the
Świętokrzyski Bridge, control with other benchmarks is pointless, as these benchmarks
are located near main city streets. Street traffic influences the stability of all city
benchmarks; therefore, in order to control the reference benchmarks, the levelling
would have to be moved outside the built-up area.
Adjustment of the network, presentation and analysis of the obtained results

In order to obtain displacements of the structure as close to the real values as
possible, special attention must be paid to the adjustment of the observations
obtained in the subsequent measurement cycles. A correct adjustment guarantees that
the results obtained from the measurements and calculations are the most plausible.
The adjustment was carried out with the use of the program Niwel, developed by
Ph.D. Eng. Ryszard Malarski from the Warsaw University of Technology. First of all,
an initial adjustment was carried out. Its purpose was to eliminate gross errors,
which frequently arise during the numbering of benchmarks, and to define the level
of accuracy that can be expected during the following stages of the adjustment. It
can be said that the primary role of this adjustment is to perform a diagnostic function
and to allow possible corrections to be introduced into the observation material.
Fig. 3. Shape of reference, southern side (fragment of the whole)
The primary function of the initial adjustment is to provide data to begin the
identification of the reference points (fixed points), on which the adjustment of the
entire observation set is based. The initial adjustment can be carried out using the
method of differences of observations when we are dealing with identical
observation systems in the initial and current measurements. We then adjust the
differences of observations, taking into account their accuracy characteristics
(weights) calculated on the basis of their counterparts in both measurements. The
weights can be related to the mean errors or to the number of instrument stations.
In the case of the observations referred to in this paper, weighting with relation to
the number of stations was used. The result of this comparison is the mean error of
a typical observation and the vector of displacements of the network points, called
virtual displacements.
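As an illustration of this weighting scheme, the following sketch (Python with NumPy; all segment data are invented, and holding one point provisionally fixed is a simplifying assumption) adjusts the differences of observations from two cycles, with weights taken as the reciprocal of the number of instrument stations:

import numpy as np

# Sketch of the initial adjustment by differences of observations (invented
# data). For a segment between points i and j the observed difference
# d = dh_current - dh_initial is modelled as w_j - w_i, where w are the
# virtual displacements; weights are 1/(number of instrument stations).
segments = [  # (i, j, dh_initial [m], dh_current [m], stations)
    (0, 1, 0.12345, 0.12351, 3),
    (1, 2, -0.04012, -0.04020, 2),
    (2, 3, 0.00530, 0.00548, 4),
    (0, 3, 0.08870, 0.08895, 5),
]
n_pts = 4

A = np.zeros((len(segments), n_pts))
d = np.zeros(len(segments))
P = np.zeros(len(segments))
for k, (i, j, dh0, dh1, stations) in enumerate(segments):
    A[k, i], A[k, j] = -1.0, 1.0
    d[k] = (dh1 - dh0) * 1000.0   # observed difference [mm]
    P[k] = 1.0 / stations         # weight from the number of stations

Ar = A[:, 1:]                     # point 0 provisionally held fixed
N = Ar.T @ np.diag(P) @ Ar
w = np.linalg.solve(N, Ar.T @ (P * d))   # virtual displacements [mm]
v = Ar @ w - d                           # residuals [mm]
m0 = np.sqrt((P * v ** 2).sum() / (len(segments) - (n_pts - 1)))
print("virtual displacements [mm]:", np.round(w, 2), "m0 [mm]:", round(m0, 2))

The value m0 computed here plays the role of the mean error of a typical observation quoted below for the individual cycles.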
After the initial adjustment we move on to the identification of the reference basis; in the case of the observations
carried out on the Świętokrzyski Bridge, the method of the shared confidence
interval was used. Here, we operate with confidence intervals for each of the
potential reference benchmark displacements obtained as a result of the initial
adjustment. During the adjustment of the network measured on the Świętokrzyski
Bridge it turned out that benchmarks no. 997, 4001, 4000, 1000, 1001, 996, 998
and 999 satisfied the stability criterion within the range of a double mean error of
the virtual displacement.
The mean error of a typical observation per station was:
Cycle I – m0 = 0.44 mm
Cycle II – m0 = 0.94 mm
Cycle III – m0 = 0.17 mm.
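The stability criterion itself reduces to a simple test; a minimal sketch with hypothetical values (the benchmark numbers are reused from the text, but the displacements and their mean errors are invented):

# Sketch of the reference-basis identification described above: a candidate
# benchmark is accepted as stable when its virtual displacement stays within
# twice the mean error of that displacement. All values are invented.
def is_stable(virtual_displacement_mm, mean_error_mm):
    return abs(virtual_displacement_mm) <= 2.0 * mean_error_mm

candidates = {  # benchmark: (virtual displacement, its mean error) [mm]
    "997": (0.2, 0.3), "1000": (0.3, 0.4), "1001": (-0.5, 0.4),
    "4000": (0.6, 0.5), "116": (4.8, 0.5),
}
reference = [nr for nr, (w, mw) in candidates.items() if is_stable(w, mw)]
print("accepted as reference basis:", reference)  # benchmark 116 is rejected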
Wykres przemieszczeń pionowych rp. 400–452 (strona północna – „poręcze”)
[Chart: vertical displacements of benchmarks 400–452, northern side (“handrails”); values in mm; benchmark numbers (nr reperów) run from the PRAGA to the ŚRÓDMIEŚCIE bridgehead]
Wykres przemieszczeń pionowych rp. 200–223 (strona południowa – „bloki kotwiące liny”)
[Chart: vertical displacements of benchmarks 200–223, southern side (“cable anchoring blocks”); values in mm; benchmark numbers (nr reperów) run from the PRAGA to the ŚRÓDMIEŚCIE bridgehead]
On the basis of the comparison of observations from cycle I and cycle II,
comparison profiles were drawn for all lines of benchmarks included in the
controlled network. The profiles show displacements relative to the first
measurement; we can observe displacements reaching values of up to 18 mm. The
conducted analysis shows that, relative to the initial measurement, the bridge
uplifts at the bridgeheads and in the middle, and is concave between the
bridgeheads and the centre.
The above diagrams show a comparison of the displacements from the three
measurement cycles. Cycle II and cycle III are referenced against the initial
measurement, namely cycle I. The diagrams clearly present the displacements
occurring in the bridge construction. The group of benchmarks with the highest
displacements, reaching more than a dozen millimetres, are benchmarks no.
116 – 132, 200 – 221,
300 – 307 and 314 – 321. The benchmarks whose height has not changed are those
numbered 102, 138, 407, 437 and 448. The analysis of the obtained displacement
results showed that the bridge is concave along its entire length. The displacements
assume the highest values in the central part and decrease towards the bridgeheads.
The results obtained during the evaluation make it possible to state whether
changes of the construction have occurred and, if so, what their extent is. The
results and their graphic presentation are submitted to experts on bridge
constructions; this allows the causes of the displacements to be defined and, in the
long run, the measures necessary to prevent such displacements to be undertaken.
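The displacement profiles themselves are obtained by differencing the adjusted heights of each cycle against cycle I; a minimal sketch with invented heights:

# Sketch: a displacement profile is the per-benchmark difference of adjusted
# heights of a later cycle against cycle I, in millimetres. Heights invented.
heights = {  # benchmark: (cycle I, cycle II, cycle III) adjusted heights [m]
    "400": (3.10512, 3.10540, 3.10555),
    "401": (3.21420, 3.21690, 3.21810),
    "402": (3.30218, 3.30900, 3.31120),
}
for nr, (h1, h2, h3) in heights.items():
    w2, w3 = (h2 - h1) * 1000.0, (h3 - h1) * 1000.0
    print(f"rp {nr}: cycle II {w2:+.2f} mm, cycle III {w3:+.2f} mm")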
Summary and conclusions
Geodetic measurements of bridge constructions constitute a part of the control
measurements carried out on engineering objects. Next to the most technologically
advanced measurements of stress, construction deflections or cable tension, they
constitute one of the most significant groups of measurements, without which the
technical evaluation of the bridge would not be possible.
1. Thanks to correctly positioned and measured benchmarks, it is possible for
specialists from other fields of engineering to conduct a construction evaluation.
A correct geodetic analysis and presentation of the results make it possible to state
the condition of a given construction and the measures that should be taken in
order to counteract further possible degradation.
2. Bearing in mind the small differences of height (of fractions of millimetres) on
a large number of benchmarks, these differences can only be ‘caught’ with the use
of the high-precision geometric levelling method.
3. In spite of difficult measurement conditions, namely vibrations of the cable-stayed
construction, precise levelling ensures the best result, providing us with a complete
shape of the alignment and the small, but existent, differences in relation to earlier
measurements.
4. Bar-coded levelling rods with the function of automatic recording of results make
it possible to reduce the measurement time without reducing the accuracy of
observation, hence making the measurement technology faster and more efficient,
which is of high significance not only to the measurement team but also to the
ordering party.
5. The results obtained after the adjustment show the presence of vertical
displacements of the Świętokrzyski Bridge in a relatively short period of time (all
cycles were carried out within one year); it therefore seems necessary to monitor the
object constantly, e.g. in quarterly cycles. The planned cycles of repeated
measurements should therefore take into account the necessity of putting the
object out of traffic use for the time of measurement.
6. The analysis conducted on the basis of the adjusted measurements shows the
changes that occur in the construction and allows a further analysis from the
constructor’s point of view.
7. The presented method clearly allows the presence and location of changes to be
stated, giving a clear overview of the displacements.
Bibliography
[1] Prószyński W., Kwaśniak M., Podstawy geodezyjnego wyznaczania przemieszczeń.
Pojęcia i elementy metodyki, Oficyna Wydawnicza PW, 2006.
[2] Bryś H., Przewłocki S., Geodezyjne metody pomiarów przemieszczeń budowli, PWN,
1998.
[3] Biliszczuk J., Mosty podwieszone. Projektowanie i realizacja, Wydawnictwo Arkady,
2005.
Summary
Key words: bridges, measurement, monitoring
In this article the author presents different methods of control measurements of bridges.
Tomasz Wojciechowski
Kontrolne pomiary mostów
Streszczenie
Słowa kluczowe: mosty, pomiary, monitoring
W artykule autor prezentuje różne metody kontrolnych pomiarów mostów.
ZESZYTY NAUKOWE
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Nr 4 (46) / 2014
Anna Bielska
Warsaw University of Technology, Faculty of Geodesy and Cartography,
Department of Spatial Planning and Environmental Sciences
Piotr Skłodowski
Maria Skłodowska-Curie Warsaw Academy
Analysis of the factors that affect determination
of the field-forest boundary
Introduction
In order to achieve the optimal use of the earth surface and to arrange agricultural and forest space in line with soil, environmental and landscape conditions, the
term “field-forest boundary” is used in terminology connected with management of
agricultural lands. It is a line that delimitates land contour, which represents prospective agricultural or forestry use of lands [Guidelines, 2003]. Field-forest boundary is one of the most important and readily apparent landscape borders. In natural
environment it separates at least two ecosystems of clearly distinct appearance,
different functioning and use of soils [Ostafin K., 2008]. Forestry sector is territorially and functionally linked to rural areas (which cover more than 93% of the
country) and it constitutes an economic alternative in management of such areas.
The basic function of rural areas is the agricultural function. Nevertheless, contemporary efforts favour multifunctional development, i.e. support of other, nonagricultural use, including increased afforestation [Łupiński W., 2008], [Konieczna
J., 2012]. It should be emphasized that such changes have long-term effects [Fotyga
240
Analysis of the factors that affect determination of the field-forest boundary
Compared with the neighbouring countries and the European Union average
(31.1%), the afforestation rate in Poland is low (29.3%). Moreover, the rate varies
considerably in different regions of Poland and forest complexes are highly
fragmented. The adopted programme for the augmentation of Poland’s forest
cover assumes an increase in forest cover to 30% by 2020 and 33% after 2050
[PAPFC, 2003].
The arrangement of agricultural production space should rely on a durable
determination of terrain use (agricultural or forestry) that takes into account other
social objectives, by setting stable borders of agro-ecological complexes in
accordance with the relevant provisions of law, including, without limitation, the
Act of 28 September 1991 on forests (consolidated text Official Journal No.
2011.12.59, as amended). Pursuant to Art. 14 thereof, the following may be
designated for afforestation: wastelands, arable lands that are not suitable for
agricultural production, arable lands that are not used for agricultural purposes and
other lands suitable for afforestation, such as:
1. lands located in spring areas of rivers and streams, in watersheds, along river
banks or near lakes and other water reservoirs;
2. quicksand and sand dunes;
3. steep slopes, hillsides, crags and hollows;
4. slag heaps and areas after depleted sand, gravel, peat or clay deposits.
Pursuant to the Ministry of Agriculture and Rural Development guidelines on the
field-forest boundary, the following should be included in a forest complex:
mid-field woodlots; forest lands; class Rz-VI or R-VI agricultural lands of the 7th
soil agricultural suitability complex; class R-V arable lands of the 6th soil
agricultural suitability complex that do not allow for effective agricultural
production; and class Ps-VIz or Ps-VI pasture lands located in areas of low
groundwater levels, immediately adjacent to forest complexes.
The purpose of the research was an analysis of the factors that affect the manner
of designing the field-forest boundary. It was assumed that apart from soil
conditions, which undoubtedly significantly influence the designation of lands for
afforestation, economic and social factors, which result from land location, plot
structure and manner of use, are also very important.
Survey area and research method
The research was conducted in five geodetic units located in different communes
(table 1). The investigated areas varied considerably in terms of location, manner of
use and soil conditions.
Table 1. The list of investigated areas

Powiat / County | Gmina / Commune | Obręb / Geodetic unit | Powierzchnia [ha] / Surface [ha] | Udział lasów w ogólnej powierzchni [%] / Percent content of forests in the total area [%] | Udział gleb najsłabszych [%] / Percent content of poorest quality soils [%]
włocławski | Lubień Kujawski | Kanibród | 400 | 0.3 | 19
radzyński | Radzyń Podlaski | Paszki Małe | 686 | 47 | 8
nowodworski | Pomiechówek | Błędowo | 411 | 14 | 41
nowodworski | Nasielsk | Morgi | 384 | 15 | 25
nowodworski | Nasielsk | Pieścirogi Nowe | 88 | 0.9 | 45

Source: own data.
The following documents were examined:
− land and building registration database,
− soil agricultural maps at the scale of 1:5,000 together with annexes thereto,
− soil classification maps at the scale of 1:5,000,
− orthophotomaps,
− communal studies of conditions and directions of land use,
− local development plans.
On the basis of the cartographic and descriptive materials, a spatial analysis was
performed with the use of Quantum GIS software. This analysis allowed the areas
suitable for afforestation to be designated. With respect to all the geodetic units the
same factors were considered: soil conditions, location, manner of use and plot
structure. On the basis of the conducted analysis, a field-forest boundary was
suggested for each unit, which will facilitate further multifunctional sustainable
development of the survey area.
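The kind of overlay analysis described here could look roughly as follows when scripted (sketched with GeoPandas rather than the QGIS interface; the file names, column names, metric CRS and the 100 m adjacency buffer are all illustrative assumptions, not the authors’ data):

import geopandas as gpd
from shapely.ops import unary_union

# Sketch of an overlay analysis flagging afforestation candidates: weakest
# soil classes, near an existing forest, not designated for development.
parcels = gpd.read_file("parcels_with_soil.gpkg")    # hypothetical input
forests = gpd.read_file("forest_complexes.gpkg")     # hypothetical input

weak_soil = parcels["valuation_class"].isin(["RV", "RVI", "PsVI", "PsVIz"])
near_forest = parcels.geometry.intersects(
    unary_union(forests.geometry).buffer(100))       # within ~100 m of forest
not_built_up = ~parcels["planned_development"]       # boolean column assumed

candidates = parcels[weak_soil & near_forest & not_built_up]
print(f"{len(candidates)} parcels, {candidates.area.sum() / 10_000:.1f} ha "
      "suggested for afforestation")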
Results
The determination of the field-forest boundary in all the investigated areas was
preceded by a thorough analysis of planning documents, i.e. studies of conditions
and directions of spatial development, as well as local development plans. This
allowed the directions of possible development of the relevant areas to be
identified.
In the case of geodetic units where the agricultural function prevails, such as
Kanibród and Paszki Małe, the decisive role was played by soil conditions and the
structure of land use. The Kanibród geodetic unit is characterised by a virtual lack
of forest cover (0.3% of the unit area) and a high content of good and mediocre
arable lands (of IIIa, IIIb, IVa and IVb soil valuation classes), which cover 67% of
the unit’s total area (fig. 1). The content of the poorest quality soils (valuation
classes V and VI of the 6th and 7th soil agricultural suitability complexes) was
relatively low – 19%. Therefore, only a small area (7.5 ha) was designated for
afforestation, in the immediate vicinity of the existing forest complexes. This will
facilitate rational use of the poorest soils of the unit without unnecessary loss of
any lands used for agricultural production.
The Paszki Małe geodetic unit is located 8.5 km from the centre of Radzyń Podlaski.
However, one-family housing does not develop there and the agricultural-forestry
function prevails. The area is characterised by a high forest cover rate (47%), a low
content of weak and very weak soils (8%) and a relatively large share of good
quality soils (43%). As a result of the analysis, only one contour of arable lands
designated for afforestation was determined, with a surface area of 3.7 ha. It
includes lands located in the immediate vicinity of forests, as well as lands subject
to natural forest succession (fig. 2).
Fig. 1. Land use and suggested afforestation in Kanibród geodetic unit
Source: own data, prepared on the basis of information from land register and the soil agricultural map.
The Błędowo geodetic unit is located 14 km north of Nowy Dwór Mazowiecki and
54 km from Warsaw. In the northern part of the village a natural water reservoir is
located – Błędowskie Lake, with a surface area of approximately 9 ha. The eastern
border of the village is shaped by the bends of the Wkra River. Due to water
erosion, the river flows in a tunnel valley, which is narrow and characterised by
a significant cross slope, as well as irregular alluvial terraces, which are distinct only
in the Błędowo region and thus improve the touristic values of that area.
Recreational plots, owned mainly by inhabitants of Warsaw, are located along the
river banks. Approximately 40% of the unit’s surface area was designated for
existing or planned development in the study of conditions and directions of spatial
development of the Pomiechówek commune, which is further evidenced by the
division of many construction plots (fig. 3). In such a situation afforestation was
suggested only with respect to areas (10.2 ha) that are located in the immediate
vicinity of the existing forests or the river (flood areas) and which are not
designated for development.
Fig. 2. Suggested afforestation in Paszki Małe geodetic unit
Source: own data, prepared on the basis of information from land register and the orthophotomap.
The Pieścirogi Nowe and Morgi geodetic units are adjacent to each other and are
located, respectively, 3.5 km and 6 km from Nasielsk. However, there is
a substantial difference between them, due to the fact that near the eastern border
of the Pieścirogi Nowe geodetic unit the Nasielsk railway station is located, on the
Warsaw – Gdańsk railway line.
Fig. 3. A fragment of Błędowo orthophotomap with indicated plots of land
Source: own data, prepared on the basis of information from land register and the orthophotomap.
This provides convenient connections to other towns and cities (e.g. 58 minutes to
Warsaw), which is of significant importance for development and land management
in that unit (fig. 4). The Morgi geodetic unit, due to its less favourable location and
better soil conditions (55% of good and mediocre soils of ŁIII, ŁIV, RIIIa, RIIIb,
RIVa and RIVb valuation classes), focuses on the agricultural function, although its
plot structure is defective. The plots are long (an average of 650 m) and narrow (an
average of 25 m), which results in an unfavourable proportion of 1:26.
Fig. 4. Land use in Pieścirogi Nowe and Morgi geodetic units
Source: own data, prepared on the basis of information from land register and the soil agricultural map.
Despite the poor forest cover, no afforestation was suggested for the Pieścirogi
Nowe geodetic unit, due to the fact that housing may be developed in the area,
which is evidenced by the divisions of plots (fig. 5).
Fig. 5. Plots against the background of soil agricultural suitability complexes
in Pieścirogi geodetic unit
Source: own data, prepared on the basis of information from land register and the soil agricultural map.
In the Morgi village afforestation was suggested with respect to lands classified as:
RVI – of the 7th complex, RV – of the 5th and the 9th complexes, and PsIV – of
the 2nd complex. Admittedly, these are not the poorest soils in the area, but they
constitute a half-enclave within a compact forest complex, and the lack of
agricultural exploitation has resulted in natural forest succession. The unfavourable
plot structure is evidenced by the fact that the afforestation concerns over a dozen
plots of a total surface area of 6.2 ha.
Discussion and summary
The basis of a correct determination of the field-forest boundary is a thorough
analysis of the existing situation from the perspective of soil conditions, relief and
landscape elements [Konieczna J., 2012]. This paper focuses on designating lands
for afforestation, which, under specific circumstances, constitutes the optimal use
of land and thus maximises profits, both financial and environmental. It follows
from the research that location (understood in a broader sense), soil conditions,
manner of use and structure of plots are the most important factors that should be
considered in the process of determining the field-forest boundary. No particular
attention was paid to the aspects of landscape or relief, due to the fact that the
research sites were located in lowlands.
Cartographic and descriptive materials were used in order to determine the
essential factors. Special attention should be paid to soil agricultural maps at the
scale of 1:5,000, which, apart from the land use and soil valuation class that can also
be found on more up-to-date classification maps [Bielska A., 2012], indicate the soil
agricultural suitability complex, the type and kind of soil, as well as the particle size
of the whole soil profile. These data are essential – for example, the “RVI” symbol
on a registry map indicates solely a very weak arable land but does not explain the
reasons for such a classification. The additional information provided by soil
agricultural maps may indicate soils formed from sands and permanently dry
(6A – light and slightly loamy sands) or, on the contrary, muck soils of permanently
excessive moisture content (9M – light and slightly loamy sands). Such information
is crucial for the purposes of selecting appropriate tree species for afforestation.
A classification map at the scale of 1:5,000 could also be used, from which the soil
agricultural suitability complex and particle size can, indirectly, be established
[Skłodowski P., Bielska A., 2009].
The research proved that in order to determine a correct field-forest boundary,
a combination of factors should be comprehensively considered. According to the
authors, two main factors, i.e. location and soil conditions, as well as some
additional factors, i.e. plot structure, land use and landscape aspects [Krysiak S.,
2009], constitute an effective set and should not be considered separately. A direct
link exists between the location, quality and agricultural suitability of a given real
property and its designation (and, consequently, its value). The research evidences
this relationship very clearly. Afforestation of the weakest lands is suggested if there
is no possibility to introduce another, more profitable function (e.g. designation for
development). Obviously, it should be made in line with the planning documents
and the existing concept of the relevant area’s development. It should also be
emphasized that, as a rule, the designation of lands for afforestation (as well as for
development) permanently excludes such lands from agricultural production.
References
[1] Bielska A., Problemy cyfryzacji analogowych map glebowo-rolniczych w skali 1:5000, Acta
Scientiarum Polonorum Administratio Locorum (Gospodarka Przestrzenna),
No. 11(2), 2012.
[2] Fotyga J., Ochrona użytków zielonych w programach zalesieniowych i jej wpływ na strukturę
użytkowania i lesistość w regionie Sudetów, Woda – Środowisko – Obszary Wiejskie,
IMUZ, 2009, vol. 9, 4(28).
[3] Konieczna J., Dane katastralne podstawą projektowania granicy rolno-leśnej,
Infrastruktura i Ekologia Terenów Wiejskich, No. 3/I/2012, Polish Academy of
Sciences, Kraków.
[4] PAPFC (Programme for the Augmentation of Poland’s Forest Cover), 2003:
Krajowy Program Zwiększania Lesistości, updated in 2003, Ministry of the Environment,
https://www.mos.gov.pl/g2/big/2009_04/b3ad6cecfb46cc59e76530ba9b9d1575.pdf
[access: 20.01.2014].
[5] Krysiak S., Ekologiczne aspekty przemian krajobrazów wiejskich Polski środkowej na
obszarach występowania osadnictwa turystycznego, Problemy Ekologii Krajobrazu,
2009, vol. XXV.
[6] Łupiński W., Kształtowanie granicy rolno-leśnej jako element planowania przestrzeni na
obszarach wiejskich, Czasopismo Techniczne. Środowisko, R. 105, 2-Ś, 2008,
https://suw.biblos.pk.edu.pl/resources/i1/i3/i3/i0/r1330/LupinskiW_KsztaltowanieGranicy.pdf
[access: 29.01.2014].
[7] Ostafin K., Przyrodniczo-krajobrazowy projekt granicy rolno-leśnej w środkowej części
Beskidu Średniego między Skawą a Rabą, Studia i Materiały Centrum Edukacji
Przyrodniczo-Leśnej, R. 10, z. 3 (19)/2008.
[8] Skłodowski P., Bielska A., Potrzeby i metody aktualizacji gleboznawczej klasyfikacji
gruntów, Publishing House of Maria Skłodowska-Curie Warsaw Academy,
Faculty of Geodesy and Cartography, Warsaw, 2009.
[9] Wytyczne w sprawie ustalania granicy rolno-leśnej, Guidelines 2003:
http://www.bip.minrol.gov.pl/filerepozytory/filerepozytoryshowimage.aspx?item_id=7228
[access: 20.01.2014].
Summary
Key words: field-forest boundary, afforestation, soil agricultural maps, soil conditions, spatial
planning
In order to achieve the optimal use of the earth’s surface and to arrange agricultural and forest
space in line with soil, environmental and landscape conditions, the term “field-forest boundary” is
used in the terminology connected with the management of agricultural lands. It is a line that
delimits a land contour representing the prospective agricultural or forestry use of the land. The
purpose of the research was an analysis of the factors that affect the manner of designing the
field-forest boundary. It was assumed that apart from soil conditions, which undoubtedly
significantly influence the designation of lands for afforestation, economic and social factors, which
result from land location, plot structure and manner of use, are also very important.
Analiza czynników wpływających na projektowanie granicy rolno-leśnej
Streszczenie
Słowa kluczowe: granica rolno-leśna, zalesienia, mapy glebowo-rolnicze, warunki glebowe,
planowanie przestrzenne
W celu optymalnego wykorzystania powierzchni ziemi, uporządkowania przestrzeni rolniczej
i leśnej zgodnego z warunkami glebowymi, przyrodniczymi i krajobrazowymi, w terminologii
związanej z urządzaniem terenów rolnych stosuje się pojęcie granicy rolno-leśnej – linii zamykającej
kontur gruntowy, określającej perspektywiczny sposób rolniczego lub leśnego użytkowania gruntów.
Celem badań była analiza czynników wpływających na sposób projektowania granicy rolno-leśnej.
Założono, że oprócz warunków glebowych, które mają niewątpliwie bardzo istotny wpływ na
przeznaczenie gruntów do zalesienia, niezmiernie ważne są czynniki ekonomiczne i społeczne,
wśród których uwzględniono lokalizację gruntów, strukturę działek oraz strukturę użytkowania.
Recenzenci Zeszytów Naukowych
Uczelni Warszawskiej im. Marii Skłodowskiej-Curie
Reviewers of Scientific Journals
Prof. prof.:
1. Waldemar Bańka – Wyższa Szkoła im. Pawła Włodkowica w Płocku
2. Paweł Czarnecki – Wyższa Szkoła Menedżerska w Warszawie
3. Pavol Dančak – Uniwersytet Preszowski (Słowacja)
4. Kazimierz Doktór – Wyższa Szkoła Finansów i Zarządzania w Warszawie
5. Anatolij Drabowskij – Instytut Spółdzielczości w Winnicy (Ukraina)
6. Rudolf Dupkala – Uniwersytet Preszowski (Słowacja)
7. Siergiej Gawrow – Moskiewski Miejski Uniwersytet Pedagogiczny (Rosja)
8. Konstantin Jakimczuk – Instytut Spółdzielczości w Winnicy (Ukraina)
9. Walery Karsiekin – Kijowski Narodowy Uniwersytet Kultury i Sztuki (Ukraina)
10. Wojciech Maciejewski – Uniwersytet Warszawski
11. Hanna Markiewicz – Akademia Pedagogiki Specjalnej w Warszawie
12. Walery Nowikow – Instytut Demografii i Badań Społecznych Narodowej Akademii Nauk Ukrainy (Ukraina)
13. Alica Petrašova – Uniwersytet Preszowski (Słowacja)
14. Wanda Rusiecka – Białoruska Akademia Nauk (Białoruś)
15. Remigiusz Ryziński – Wyższa Szkoła Informatyki, Zarządzania i Administracji w Warszawie
16. Wojciech Słomski – Wyższa Szkoła Przedsiębiorczości Międzynarodowej w Preszowie (Słowacja)
17. Eugeniusz Sobczak – Politechnika Warszawska
18. Marek Storoška – Wyższa Szkoła Przedsiębiorczości Międzynarodowej w Preszowie (Słowacja)
19. Anna Wawrzonkiewicz-Słomska – Małopolska Wyższa Szkoła Ekonomiczna w Tarnowie
20. Bolesław Szafrański – Wojskowa Akademia Techniczna
21. Elżbieta Weiss – Wyższa Szkoła Przedsiębiorczości Międzynarodowej w Preszowie (Słowacja)
Informacje dla autorów / Information for Authors
W Zeszytach Naukowych Uczelni Warszawskiej im. Marii Skłodowskiej-Curie publikowane są prace przeglądowe, rozprawy, studia, artykuły, sprawozdania z konferencji
naukowych, komunikaty, recenzje, informacje z zakresu szeroko pojmowanych
nauk społecznych.
Każda praca powinna być przesyłana do redakcji w formie elektronicznej
/e-mail, płyta CD, bądź pendrive/. Należy ją odpowiednio przygotować pod
względem językowym, merytorycznym i technicznym.
Pierwsza strona złożonego w redakcji artykułu powinna zawierać: imię/imiona,
nazwisko/nazwiska autora/autorów, pełną nazwę instytucji, którą zamierza reprezentować autor bądź autorzy, tytuł pracy, jej streszczenie i słowa kluczowe. Każdy
artykuł powinien zawierać także tytuł, słowa kluczowe i streszczenie w języku angielskim.
Wraz z nadsyłanymi artykułami, recenzją czy innymi materiałami do publikacji
redakcja prosi o podanie następujących informacji: adresu korespondencyjnego,
adresu placówki naukowej / zakładu pracy, w której/ym autor/ka jest zatrudniony/a, adresu mailowego i numeru telefonu.
W artykułach zaleca się stosowanie przypisów u dołu strony. Akceptujemy także
zapisy w systemie harvardzkim. Większe opracowania powinny zawierać śródtytuły.
Tytuły tabel i rysunków należy wyakcentować. Źródła do nich podawać jak
w przypisach.
Bibliografię zestawia się alfabetycznie według nazwisk autorów. Każda pozycja
powinna zawierać nazwisko i inicjały imienia/imion autora/autorów, tytuł pracy,
nazwę czasopisma, jego numer lub tom, wydawnictwo bądź adres internetowy,
miejsce i rok wydania. Prosimy także o podawanie numerów ISBN i ISSN.
Komitet Redakcyjny zastrzega sobie prawo do dokonywania drobnych skrótów
i poprawek przesyłanego materiału bez uzgadniania ich z autorami. W przypadku
konieczności dokonania większych poprawek – dokonuje ich autor lub praca zostaje
wycofana.
*
*
*
Każda praca kierowana jest do recenzji / wzór recenzji i lista recenzentów znajduje się na stronie internetowej czasopisma/. Ocenia ją co najmniej dwóch recenzentów, a w przypadku jednej oceny negatywnej trzech. Recenzenci są spoza jednostki naukowej autora/autorów, a także nie są pracownikami etatowymi Uczelni,
nie pełnią żadnych funkcji w czasopiśmie. W przypadku artykułu zagranicznego
bądź w języku obcym, recenzenci zazwyczaj pochodzą z innego kraju niż autor/autorzy artykułu.
Autorzy nie znają tożsamości recenzentów. Recenzja ma formę pisemną
i zawiera wniosek o dopuszczeniu pracy do druku, dopuszczeniu do druku po naniesieniu uwag lub niedopuszczeniu do druku, a także oświadczenie recenzenta, że
nie ma on żadnych powiązań z autorem/autorami artykułu/artykułów.
*
*
*
In Zeszyty Naukowe Uczelni Warszawskiej im. Marii Skłodowskiej-Curie review papers,
dissertations, studies, articles, reports on scientific conferences, announcements,
reviews and information on broadly understood social sciences are published.
Every paper should be sent to the editors in an electronic form (e-mail, CD or
memory stick). It should be properly prepared in linguistic, substantive and
technical terms.
The first page of a submitted article should include first name (second name) and
surname(s) of the author(s), full name of the institution represented by the author(s), the title of the paper, its summary and key words. Every article should also
include a title, key words and its summary in English.
The editors ask authors to send articles, reviews and other materials for publication together with the following information: an address for correspondence, the
address of the institution the author works for, e-mail address(es) and a phone
number.
It is advised to add footnotes at the bottom of a page. The Harvard referencing
system is also accepted. Longer papers should include subheadings. Titles of tables
and figures should be highlighted; their sources should be given as in footnotes.
The bibliography is to be listed alphabetically according to the authors’ surnames.
Each entry has to consist of the surname and initials of the first name(s) of the
author(s), the title of the paper, the name of the journal, its issue or volume
number, the publishing house or website address, and the place and year of
publication. We also ask authors to provide ISBN and ISSN numbers.
The Editorial Committee reserves the right to make minor abridgements and
corrections to the submitted material without consulting the authors. If more
significant corrections are necessary, the author is asked to make them, or the
paper is not accepted.
*
*
*
Each paper is sent for review (the review form and the list of reviewers are
available on the journal’s website). It is assessed by at least two reviewers, and in
the case of one negative assessment, by three. The reviewers are connected neither
with the institution of the author nor with the Academy itself on employment
grounds, and they do not hold any positions in the journal. In the case of a foreign
article, or an article written in a foreign language, the reviewers usually come from
a different country than the author(s) of the article.
The authors do not know the reviewers’ identity. The review is made in writing and
includes a recommendation to accept the paper for print, to accept it after
corrections, or to reject it, together with the reviewer’s declaration that he/she has
no connections with the author(s) of the article(s).