Monthly Archives: November 2016
O.M.G Cough Virus Drop By
iPhoneOgraphy – 20 Nov 2016 (Day 325/366)
A throat lozenge (cough drop, troche, cachou, or cough sweet) is a small, typically medicated tablet intended to be dissolved slowly in the mouth to temporarily stop coughs and lubricate and soothe irritated tissues of the throat (usually due to a sore throat), possibly from the common cold or influenza. Cough tablets have taken the name lozenge, based on their original shape, a diamond.
Lozenges may contain benzocaine, an anesthetic, or eucalyptus oil. Non-menthol throat lozenges generally use either zinc gluconate glycine or pectin as an oral demulcent. Several brands of throat lozenges contain dextromethorphan.
Still other varieties, such as Halls, contain menthol, peppermint oil and/or spearmint as their active ingredient(s). Honey lozenges are also available.
Candies to soothe the throat date back to 1000 BC in Egypt’s Twentieth Dynasty, when they were made from honey flavored with citrus, herbs, and spices. In the 19th century, physicians discovered morphine and heroin, which suppress coughing at its source – the brain. Popular formulations of that era included Smith Brothers Cough Drops, first advertised in 1852, and Luden’s, created in 1879. Concern over the risk of opioid dependence led to the development of alternative medications.
The Pedestrian Bridge
iPhoneOgraphy – 19 Nov 2016 (Day 324/366)
A footbridge (also called a pedestrian bridge, pedestrian overpass, or pedestrian overcrossing) is a bridge designed for pedestrians and in some cases cyclists, animal traffic, and horse riders, instead of vehicular traffic. Footbridges complement the landscape and can be used decoratively to visually link two distinct areas or to signal a transition. In many developed countries, footbridges are functional and can also be beautiful works of art and sculpture. For poor rural communities in the developing world, a footbridge may be a community’s only access to medical clinics, schools and markets, which would otherwise be unreachable when rivers are too high to cross. Simple suspension bridge designs have been developed to be sustainable and easily constructible in such rural areas using only local materials and labor.
An enclosed footbridge between two buildings is sometimes known as a skyway. Bridges providing for both pedestrians and cyclists are often referred to as green-bridges and form an important part of the sustainable transport movement towards more sustainable cities. Footbridges are often situated to allow pedestrians to cross water or railways in areas where there are no nearby roads to necessitate a road bridge. They are also located across roads to let pedestrians cross safely without slowing down the traffic. The latter is a type of pedestrian separation structure, examples of which are particularly found near schools, to help prevent children from running in front of moving cars.
Residential-scale footbridges span short distances and can be used for a broad range of applications. Complicated engineering is not needed, and they can be built with readily available materials and basic tools.
Footbridges can also be built in the same ways as road or rail bridges, particularly suspension bridges and beam bridges. Some former road bridges have had their traffic diverted to alternative crossings and have become pedestrian bridges; examples in the UK include The Iron Bridge at Ironbridge, Shropshire, the Old Bridge at Pontypridd and Windsor Bridge at Windsor, Berkshire.
Most footbridges are equipped with guard rails to reduce the risk of pedestrians falling. Where they pass over busy roads or railways, they may also include a fence or other such barrier to prevent pedestrians from jumping, or throwing projectiles onto the traffic below.
Footbridges are small but important, because they are usually prominent in the townscape. The appearance of footbridges, and indeed of any other bridges in a town, is a major concern for designers: people have to live with these structures, usually seeing them every day. In towns of great architectural or scenic interest, conflicting demands between function and appearance may arise when bridges are built.
Footbridges can, in fact, be elegant or beautiful, and are built on a more human scale than large road and railway bridges. Railway footbridges, particularly those from earlier years, tend to be somewhat utilitarian. Apart from those in stations and in towns, they are generally not much seen, even by the passengers who pass under them.
Juicy Watermelon
iPhoneOgraphy – 18 Nov 2016 (Day 323/366)
Watermelon (Citrullus lanatus var. lanatus, family Cucurbitaceae) is a vine-like (scrambler and trailer) flowering plant originally from Southern Africa. It is a large, sprawling annual plant with coarse, hairy pinnately-lobed leaves and white to yellow flowers. It is grown for its large edible fruit, also known as a watermelon, which is a special kind of berry with a hard rind and no internal division, botanically called a pepo. The fruit has a smooth hard rind – usually green with dark green stripes or yellow spots – and a sweet, juicy interior flesh – usually deep red to pink, but sometimes orange, yellow, or white – with many seeds, which can be soft and white or hard and black.
Considerable breeding effort has been put into disease-resistant varieties and into developing a “seedless” strain with only digestible white seeds. Many cultivars are available, producing mature fruit within 100 days of planting the crop. The fruit can be eaten raw or pickled, and the rind can be cooked.
The watermelon is thought to have originated in Southern Africa, where it is found growing wild. It reaches maximum genetic diversity there, with sweet, bland and bitter forms. In the 19th century, Alphonse de Candolle considered the watermelon to be indigenous to tropical Africa. Citrullus colocynthis is often considered to be a wild ancestor of the watermelon and is now found native in north and west Africa. However, it has been suggested on the basis of chloroplast DNA investigations that the cultivated and wild watermelon diverged independently from a common ancestor, possibly C. ecirrhosus from Namibia.
Evidence of its cultivation in the Nile Valley has been found from the second millennium BC onward. Watermelon seeds have been found at Twelfth Dynasty sites and in the tomb of Pharaoh Tutankhamun.
In the 7th century watermelons were being cultivated in India, and by the 10th century had reached China, which is today the world’s single largest watermelon producer. Moorish invaders introduced the fruit into Europe and there is evidence of it being cultivated in Córdoba in 961 and also in Seville in 1158. It spread northwards through Southern Europe, perhaps limited in its advance by summer temperatures being insufficient for good yields. The fruit had begun appearing in European herbals by 1600, and was widely planted in Europe in the 17th century as a minor garden crop.
European colonists and slaves from Africa introduced the watermelon to the New World. Spanish settlers were growing it in Florida in 1576, and it was being grown in Massachusetts by 1629, and by 1650 was being cultivated in Peru, Brazil and Panama, as well as in many British and Dutch colonies. Around the same time, Native Americans were cultivating the crop in the Mississippi valley and Florida. Watermelons were rapidly accepted in Hawaii and other Pacific islands when they were introduced there by explorers such as Captain James Cook.
The Paper Guillotine
iPhoneOgraphy – 17 Nov 2016 (Day 322/366)
A paper cutter (also referred to as paper trimmer, paper guillotine or simply a guillotine) is a tool often found in offices and classrooms, designed to cut a large set of paper at once with a straight edge.
Paper cutters similar to those of today were patented in 1844 and 1852 by Guillaume Massiquot. They have been around since the late 1830s: in 1837, Thirault built a model with a blade fixed to a flat surface. Since the middle of the 19th century, considerable improvements have been made by Fomm and Krause of Germany, Furnival in England, and Oswego and Seybold in the United States.
Paper cutters vary in size, usually from about 30 centimetres (1 ft) in length on each side for office work to 841 millimetres (33.1 in) (an edge of A1 paper) in design workshops. The surface will usually have a grid either painted or inscribed on it, often in half-inch increments, and may have a ruler across the top. At the very least, it must have a flat edge against which the user may line up the paper at right-angles before passing it under the blade. It is usually relatively heavy, so that it will remain steady while in use.
On the right-hand edge is a long, curved steel blade, often referred to as a knife, attached to the base at one corner. Larger versions have a strong compression coil spring as part of the attachment mechanism that pulls the knife against the stationary edge as the knife is drawn down to cut the paper. The other end of the knife unit is a handle. The stationary right edge of the base is also steel, with an exposed, finely ground edge. When the knife is pulled down to cut paper, the action resembles that of a pair of scissors, only instead of two knives moving against each other, one is stationary. The combination of a blade mounted to a steady base produces clean and straight cuts, the likes of which would otherwise require a ruler and razor blade to achieve on a single page. Paper cutters are also used for cutting thin sheet metal, cardboard, and plastic. The blade on a paper cutter is made of steel, which makes it almost impossible to break.
A variant design uses a wheel-shaped blade mounted on a sliding shuttle attached to a rail. This type of paper cutter is known as a rotary paper cutter. Advantages of this design include being able to make wavy cuts, perforations or just score the paper without cutting, with the use of various circular blades. It is also almost impossible for the user to cut him/herself, except while changing the blade. This makes it safer for home use. Higher-end versions of rotary paper cutters are used for precision paper cutting and are popular for cutting down photographs.
An even simpler design uses double-edged blades which do not rotate, but cut like a penknife. While cheaper, this design is not preferable for serious work due to its tendency to tear paper, and poor performance with thick media.
Most paper cutters come equipped with a finger guard to prevent users from accidentally cutting themselves or severing a digit while using the apparatus. However, injuries are still possible if the device is not used with proper care or attention.
The Fire Chicken
iPhoneOgraphy – 16 Nov 2016 (Day 321/366)
Buldak is a Korean dish made from heavily spiced chicken. The term “bul” is Korean for “fire” and “dak” translates to “chicken.” A decade ago, buldak became famous for its extreme spiciness. Even some Koreans are unable to eat buldak for this reason.
South Korea’s long-term recession and economic downturn made people seek spicy food in order to relieve stress. Buldak was invented by Fuyuan Foods, which first registered Buldak at a patent office around 2000. In April 2008, however, with the expiration of the original patent, the name Buldak became free for public use. There used to be only one chain of restaurants that served Buldak, but now there are many more. Famous Buldak restaurants are Hongcho Buldak, Hwarang Buldak, and Hwaro Buldak. Buldak has also become somewhat prominent in supermarkets, with brands such as Samyang Food creating Buldak-flavored ramen. Buldak has led to the development of other dishes inspired by it. In recent years, however, its popularity has somewhat declined.
Hokkaido Baked Cheese Tarts
iPhoneOgraphy – 15 Nov 2016 (Day 320/366)
Dessert is a course that concludes a main meal. The course usually consists of sweet foods and beverages, such as dessert wine or liqueurs, but may include coffee, cheeses, nuts, or other savory items. In some parts of the world, such as much of central and western Africa, there is no tradition of a dessert course to conclude a meal.
The term “dessert” can apply to many confections, such as cakes, tarts, cookies, biscuits, gelatins, pastries, ice creams, pies, puddings, custards, and sweet soups. Fruit is also commonly found in dessert courses because of its naturally occurring sweetness. Some cultures sweeten foods that are more commonly savory to create desserts.
The word “dessert” originated from the French word desservir, meaning “to clear the table.” Its first known use was in 1600, in a health education manual entitled Naturall and artificial Directions for Health, written by William Vaughan. In his A History of Dessert (2013), Michael Krondl explains that the word refers to the fact that dessert was served after the table had been cleared of other dishes. The term dates from the 14th century but attained its current meaning around the beginning of the 20th century, when “service à la française” (setting a variety of dishes on the table at the same time) was replaced with “service à la russe” (presenting a meal in courses).
Sweets were fed to the gods in ancient Mesopotamia, India and other ancient civilizations. Dried fruit and honey were probably the first sweeteners used in most of the world, but the spread of sugarcane around the world was essential to the development of dessert.
Sugarcane was grown and refined in India before 500 BCE and was crystallized, making it easy to transport, by 500 CE. Sugar and sugarcane were traded, making sugar available to Macedonia by 300 BCE and China by 600 CE. In South Asia, the Middle East and China, sugar has been a staple of cooking and desserts for over a thousand years. Sugarcane and sugar were little known and rare in Europe until the twelfth century or later, when the Crusades and then colonization spread their use.
Europeans began to manufacture sugar in the Middle Ages, and more sweet desserts became available. Even then, sugar was so expensive that usually only the wealthy could indulge on special occasions. The first apple pie recipe was published in 1381. The earliest documentation of the term cupcake was in Eliza Leslie’s 1828 cookbook Seventy-five Receipts for Pastry, Cakes, and Sweetmeats.
The Industrial Revolution in America and Europe caused desserts (and food in general) to be mass-produced, processed, preserved, canned, and packaged. Frozen foods became very popular starting in the 1920s, when freezing technology emerged. These processed foods became a large part of diets in many industrialized nations. Many countries have desserts and foods distinctive to their nation or region.
Japanese Dolls “Kokeshi”
iPhoneOgraphy – 14 Nov 2016 (Day 319/366)
Kokeshi (こけし, kokeshi) are Japanese dolls, originally from northern Japan. They are handmade from wood, have a simple trunk and an enlarged head with a few thin, painted lines to define the face. The body has a floral design painted in red, black, and sometimes yellow, and covered with a layer of wax. One characteristic of kokeshi dolls is their lack of arms or legs. The bottom is typically marked with the signature of the artist.
The origin and naming of kokeshi is unclear, with historical ateji spellings including 小芥子, 木牌子, 木形子, and 木芥子. The hiragana spelling こけし was agreed on at the All-Japan Kokeshi Exhibition (全国こけし大会) at Naruko Onsen in August 1939. A plausible theory is that “kokeshi” is derived from wooden (木 ki, ko) or small (小 ko), and dolls (芥子 keshi).
Kokeshi were first produced by kijishi (木地師), artisans proficient with a potter’s wheel, at the Shinchi Shuraku, near the Tōgatta Onsen in Zaō from where kokeshi-making techniques spread to other spa areas in the Tōhoku region. It is said that these dolls were originally made during the middle of the Edo period (1600–1868) to be sold to people who were visiting the hot springs in the north-east of the country.
Kokeshi dolls have been used as an inspiration for the style of Nintendo’s digital avatars, called “Miis”, which are created and customized by players. Their appearance has become the symbol of the platform’s overall aesthetic.
“Traditional” kokeshi (伝統こけし dentō-kokeshi) dolls’ shapes and patterns are particular to a certain area and are classified under eleven types, shown below. The most dominant type is the Naruko variety originally made in Miyagi Prefecture, which can also be found in Akita, Iwate, and Yamagata Prefectures. The main street of the Naruko Onsen Village is known as Kokeshi Street and has shops which are operated directly by the kokeshi carvers.
“Creative” kokeshi (新型こけし shingata-kokeshi) allow the artist complete freedom in terms of shape, design and color and were developed after World War II (1945). They are not particular to a specific region of Japan and generally creative kokeshi artists are found in cities.
The woods used for kokeshi vary, with cherry used for its darkness and dogwood for its softer qualities. Itaya-kaede, a Japanese maple, is also used in the creation of both traditional and creative dolls. The wood is left outdoors to season for one to five years before it can be used.
Special Operation Force
iPhoneOgraphy – 13 Nov 2016 (Day 318/366)
A commando is a soldier or operative of an elite light infantry or special operations force often specializing in amphibious landings, parachuting or abseiling.
Originally “a commando” was a type of combat unit, as opposed to an individual in that unit. In other languages, commando and kommando denote a “command”, including the sense of a military or an elite special operations unit.
In the militaries and governments of most countries, commandos are distinctive in that they specialize in assaults on unconventional, high-value targets. However, the term commando is sometimes also used of other units carrying out such specialized tasks (including some civilian police units).
In English, the unit is occasionally capitalized as Commando to distinguish it from an individual commando.
The word stems from the Afrikaans word kommando, which translates roughly to “mobile infantry regiment”. This term originally referred to mounted infantry regiments, who fought against the British Army in the first and second Boer Wars.
The Dutch word has had the meaning of “a military order or command” since at least 1652; it likely came into the language through the influence of the Portuguese word commando (meaning “command”). (In Dutch, “commando” can also mean a command given to a computer, e.g., “het mkdir-commando” (= “create a directory”).) It is also possible the word was adopted into Afrikaans from interactions with Portuguese colonies. Less likely, it is a High German loan word, which was borrowed from Italian in the 17th century, from the sizable minority of German settlers in the initial European colonization of South Africa.
The officer commanding an Afrikaans kommando is called a kommandant, which is a regimental commander equivalent to a lieutenant-colonel or a colonel.
The Oxford English Dictionary ties the English use of the word meaning “a member of a body of picked men …” directly into its Afrikaans origins:
“1943 Combined Operations (Min. of Information) i. Lieutenant-Colonel D. W. Clarke… produced the outline of a scheme…. The men for this type of irregular warfare should, he suggested, be formed into units to be known as Commandos…. Nor was the historical parallel far-fetched. After the victories of Roberts and Kitchener had scattered the Boer army, the guerrilla tactics of its individual units (which were styled ‘Commandos’)… prevented decisive victory…. His [sc. Lt.-Col. D. W. Clarke’s] ideas were accepted; so also, with some hesitation, was the name Commando.”
During World War II, newspaper reports of the deeds of “the commandos” led to readers thinking that the singular meant one man rather than one military unit, and this new usage became established.
After the Dutch Cape Colony was established in 1652, the word was used to describe bands of militia. The first “Commando Law” was instated by the original Dutch East India Company chartered settlements, and similar laws were maintained through the independent Boer Orange Free State and South African Republic. The law compelled Burghers to equip themselves with a horse and a firearm when required in defense. The implementation of these laws was called the “Commando System”. A group of mounted militiamen was organized in a unit known as a commando and headed by a Commandant, who was normally elected from inside the unit. Men called up to serve were said to be “on commando”. British experience with this system led to the widespread adoption of the word “commandeer” into English in the 1880s.
During the “Great Trek”, conflicts with Southern African peoples such as the Xhosa and the Zulu caused the Boers to retain the commando system despite being free of colonial laws. The word also came to describe any armed raid. During this period, the Boers also developed guerrilla techniques for use against numerically superior but less mobile bands of natives such as the Zulu, who fought in large, complex formations.
In the First Boer War, Boer commandos were able to use superior marksmanship, fieldcraft, camouflage and mobility to expel an occupying British force (poorly trained in marksmanship, wearing red uniforms and unmounted) from the Transvaal. These tactics were continued throughout the Second Boer War. In the final phase of the war, 75,000 Boers carried out asymmetric warfare against the 450,000-strong British Imperial forces for two years after the British had captured the capital cities of the two Boer republics. During these conflicts the word entered English, retaining its general Afrikaans meaning of a “militia unit” or a “raid”. Robert Baden-Powell recognised the importance of fieldcraft and was inspired to form the scouting movement.
In 1941, Lieutenant-Colonel D. W. Clarke of the British Imperial General Staff, suggested the name Commando for specialized raiding units of the British Army Special Service in evocation of the effectiveness and tactics of the Boer commandos. During World War II, American and British publications, confused over the use of the plural “commandos” for that type of British military units, gave rise to the modern common habit of using “a commando” to mean one member of such a unit, or one man engaged on a raiding-type operation.
The Crystal Chandelier
iPhoneOgraphy – 12 Nov 2016 (Day 317/366)
A chandelier is a decorative ceiling-mounted light fixture. Chandeliers are often ornate, and normally use lamps. Crystal chandeliers have more or less complex arrays of crystal prisms to illuminate a room with refracted light. Chandeliers are often located in hallways, living rooms, and recently in bathrooms.
The word chandelier was first known in the English language in 1736, borrowed from the Old French word chandelier, which comes from the Latin candelabrum.
The earliest candle chandeliers were used by the wealthy in medieval times; this type of chandelier could be moved to different rooms. From the 15th century, more complex forms of chandeliers, based on ring or crown designs, became popular decorative features in palaces and homes of nobility, clergy and merchants. Its high cost made the chandelier a symbol of luxury and status.
By the early 18th century, ornate cast ormolu forms with long, curved arms and many candles were in the homes of many in the growing merchant class. Neoclassical motifs became an increasingly common element, mostly in cast metals but also in carved and gilded wood. Chandeliers made in this style also drew heavily on the aesthetic of ancient Greece and Rome, incorporating clean lines, classical proportions and mythological creatures. Developments in glassmaking later allowed cheaper production of lead crystal, the light scattering properties of which quickly made it a popular addition to the form, leading to the crystal chandelier.
During the 18th century, glass chandeliers were produced by Bohemian and Venetian glassmakers, both masters in the art of making chandeliers. The Bohemian style was largely successful across Europe; its biggest draw was the chance to obtain spectacular light refraction from the facets and bevels of crystal prisms. In reaction to this new taste, Italian glass factories in Murano created new kinds of artistic light sources. Since Murano glass was not suitable for faceting, the typical work realized at the time in other countries where crystal was used, Venetian glassmakers relied upon the unique qualities of their glass. Typical features of a Murano chandelier are the intricate arabesques of leaves, flowers and fruits, enriched by coloured glass made possible by the specific type of glass used in Murano. This glass was unique, being soda glass (famed for its extraordinary lightness) in complete contrast to the other types of glass produced in the world at that time, and an incredible amount of skill and time was required to precisely twist and shape a chandelier. The new type of chandelier was called a “ciocca”, literally a bouquet of flowers, for its characteristic decorations of glazed polychrome flowers. The most sumptuous of them consisted of a metal frame covered with small elements in blown glass, transparent or coloured, with decorations of flowers, fruits and leaves, while simpler models had arms made from a single piece of glass. Their shape was inspired by an original architectural concept: the space on the inside is left almost empty, since the decorations are spread all around the central support, distanced from it by the length of the arms. A common use of the huge Murano chandeliers was the interior lighting of theatres and rooms in important palaces.
In the mid-19th century, as gas lighting caught on, branched ceiling fixtures called gasoliers (a portmanteau of gas and chandelier) were produced, and many candle chandeliers were converted. By the 1890s, with the appearance of electric light, some chandeliers used both gas and electricity. As distribution of electricity widened, and supplies became dependable, electric-only chandeliers became standard. Another portmanteau word, electrolier, was formed for these, but nowadays they are most commonly called chandeliers. Some are fitted with bulbs shaped to imitate candle flames, for example those shown below in Epsom and Chatsworth, or with bulbs containing a shimmering gas discharge.
The world’s largest English glass chandelier (by Hancock Rixon & Dunt and probably F. & C. Osler) is located in the Dolmabahçe Palace in Istanbul. It has 750 lamps and weighs 4.5 tons. Dolmabahçe has the largest collection of British and Baccarat crystal chandeliers in the world, and one of the great staircases has balusters of Baccarat crystal.
More complex and elaborate chandeliers continued to be developed throughout the 18th and 19th centuries, but the widespread introduction of gas and electricity had devalued the chandelier’s appeal as a status symbol.
Toward the end of the 20th century, chandeliers were often used as decorative focal points for rooms, and often did not illuminate.
A Place I Call Paradise
iPhoneOgraphy – 11 Nov 2016 (Day 316/366)
A seaside resort is a resort town or resort hotel located on the coast. Sometimes it is also an officially accredited title, awarded only to towns that meet certain requirements (like the title Seebad in Germany).
Where a beach is the primary focus for tourists, it may be called a beach resort.
The coast has always been a recreational environment, although until the mid-nineteenth century, such recreation was a luxury only for the wealthy. Even in Roman times, the town of Baiae, by the Tyrrhenian Sea in Italy, was a resort for those who were sufficiently prosperous. Mersea Island, in Essex, England, was a seaside holiday destination for wealthy Romans living in Colchester.
The development of the beach as a popular leisure resort from the mid-19th century was the first manifestation of what is now the global tourist industry. The first seaside resorts were opened in the 18th century for the aristocracy, who began to frequent the seaside as well as the then fashionable spa towns, for recreation and health. One of the earliest such seaside resorts was Scarborough in Yorkshire during the 1720s; it had been a popular spa town since a stream of acidic water was discovered running from one of the cliffs to the south of the town in the 17th century. The first rolling bathing machines were introduced by 1735.
In 1793, Heiligendamm in Mecklenburg, Germany was founded as the first seaside resort of the European continent, which successfully attracted Europe’s aristocracy to the Baltic Sea.
The opening of the resort in Brighton and its reception of royal patronage from King George IV extended the seaside as a resort for health and pleasure to the much larger London market, and the beach became a centre for upper-class pleasure and frivolity. This trend was praised and artistically elevated by the new romantic ideal of the picturesque landscape; Jane Austen’s unfinished novel Sanditon is an example of that. Later, Queen Victoria’s long-standing patronage of the Isle of Wight and Ramsgate in Kent ensured that a seaside residence was considered a highly fashionable possession for those wealthy enough to afford more than one home.
The extension of this form of leisure to the middle and working class began with the development of the railways in the 1840s, which offered cheap and affordable fares to fast growing resort towns. In particular, the completion of a branch line to the small seaside town Blackpool from Poulton led to a sustained economic and demographic boom. A sudden influx of visitors arriving by rail provided the motivation for entrepreneurs to build accommodation and create new attractions, leading to more visitors and a rapid cycle of growth throughout the 1850s and 1860s.
The growth was intensified by the practice among the Lancashire cotton mill owners of closing the factories for a week every year to service and repair machinery. These became known as wakes weeks. Each town’s mills would close for a different week, allowing Blackpool to manage a steady and reliable stream of visitors over a prolonged period in the summer. A prominent feature of the resort was the promenade and the pleasure piers, where an eclectic variety of performances vied for the people’s attention. In 1863, the North Pier in Blackpool was completed, rapidly becoming a centre of attraction for elite visitors. Central Pier was completed in 1868, with a theatre and a large open-air dance floor.
Many popular beach resorts were equipped with bathing machines because even the all-covering beachwear of the period was considered immodest.
By the end of the century the English coastline had over 100 large resort towns, some with populations exceeding 50,000.
The development of the seaside resort abroad was stimulated by the well-developed English love of the beach. The French Riviera alongside the Mediterranean had already become a popular destination for the British upper class by the end of the 18th century. In 1864, the first railway to Nice was completed, making the Riviera accessible to visitors from all over Europe. By 1874, residents of foreign enclaves in Nice, most of whom were British, numbered 25,000. The coastline became renowned for attracting the royalty of Europe, including Queen Victoria and King Edward VII.
Continental European attitudes towards gambling and nudity tended to be more lax than in Britain, and British and French entrepreneurs were quick to exploit the possibilities. In 1863, the Prince of Monaco, Charles III and François Blanc, a French businessman, arranged for steamships and carriages to take visitors from Nice to Monaco, where large luxury hotels, gardens and casinos were built. The place was renamed Monte Carlo.
Commercial seabathing also spread to the United States and parts of the British Empire such as Australia, where surfing became popular in the early 20th century. By the 1970s, cheap and affordable air travel was the catalyst for the growth of a truly global tourism market, which benefited areas with a sunny climate, such as the Mediterranean coasts of Spain, Italy and southern France.
Shot & Edited using iPhone 6+