JavaScript Start-up Performance

As web developers, we know how easy it is to end up with web page bloat. But loading a webpage is much more than shipping bytes down the wire. Once the browser has downloaded our page’s scripts it then has to parse, interpret & run them. In this post, we’ll dive into this phase for JavaScript, why it might be slowing down your app’s start-up & how you can fix it.

Historically, we just haven’t spent a lot of time optimizing for the JavaScript Parse/Compile step. We almost expect scripts to be immediately parsed and executed as soon as the parser hits a <script> tag. But this isn’t quite the case. Here’s a simplified breakdown of how V8 works:

A simplified view of how V8 works. This is our idealized pipeline that we’re working towards.

Let’s focus on some of the main phases.

What slows our web apps from booting up?

Parsing, Compiling and Executing scripts are things a JavaScript engine spends significant time on during start-up. This matters because, if these steps take a while, they can delay how soon users can interact with our site. Imagine if they can see a button but not click or touch it for multiple seconds. This can degrade the user experience.

Parse & Compile times for a popular website using V8’s Runtime Call Stats in Chrome Canary. Notice how a slow Parse/Compile on desktop can take far longer on average mobile phones.

Start-up times matter for performance-sensitive code. In fact, V8 (Chrome’s JavaScript engine) spends a large amount of time parsing and compiling scripts on top sites like Facebook, Wikipedia and Reddit:

The pink area (JavaScript) represents time spent in V8 and Blink’s C++, while the orange and yellow represent parse and compile.

Parse and Compile have also been highlighted as a bottleneck by a number of large sites & frameworks you may be using. Below are tweets from Facebook’s Sebastian Markbage and Google’s Rob Wormald:

Sam Saccone calls out the cost of JS parse in ‘Planning for Performance’

As we move to an increasingly mobile world, it’s important that we understand that time spent in Parse/Compile can often be 2–5x as long on phones as on desktop. Higher-end phones (e.g. the iPhone or Pixel) will perform very differently to a Moto G4. This highlights the importance of testing on representative hardware (not just high-end!) so our users’ experiences don’t suffer.

Parse times for a 1MB bundle of JavaScript across desktop & mobile devices of differing classes. Notice how close a high-end phone like the iPhone 7 is to a MacBook Pro, versus how much longer parsing takes as we go down the graph towards average mobile hardware.

If we’re shipping huge bundles for our app, this is where adopting modern bundling techniques like code-splitting, tree-shaking and Service Worker caching can really make a huge difference. That said, even a small bundle, written poorly or with poor library choices, can result in the main thread being pegged for a long time by compilation or function calls. It’s important to holistically measure and understand where our real bottlenecks are.

What Are JavaScript Parse & Compile bottlenecks for the average website?

“Buuuut, I’m not Facebook”, I hear you say, dear reader. “How heavy are Parse & Compile times for average sites out in the wild?”, you might be asking. Let’s science this out!

I spent two months digging into the performance of a large set of production sites (6000+) built with different libraries and frameworks — like React, Angular, Ember and Vue. Most of the tests were recently redone on WebPageTest so you can easily redo them yourself or dig into the numbers if you wish. Here are some insights.

Apps became interactive in 8 seconds on desktop (using cable) and 16 seconds on mobile (Moto G4 over 3G)

What contributed to this? Most apps spent an average of 4 seconds in start-up (Parse/Compile/Exec)… on desktop.

On mobile, parse times were up to 36% higher than they were on desktop.

Was everyone shipping huge JS bundles? Not as large as I had guessed, but there’s room for improvement. At the median, developers shipped 410KB of gzipped JS for their pages. This is in line with the 420KB of ‘average JS per page’ reported by the HTTPArchive. The worst offenders were sending anywhere up to 10MB of script down the wire. Oof.

HTTPArchive stat: the average page ships down 420KB of JavaScript

Script size is important, but it isn’t everything. Parse and Compile times don’t necessarily increase linearly with script size. Smaller JavaScript bundles generally do result in a faster load time (regardless of our browser, device & network connection), but 200KB of our JS !== 200KB of someone else’s JS; the two can have wildly different parse and compile numbers.

Measuring JavaScript Parse & Compile today

Chrome DevTools

Timeline (Performance panel) > Bottom-Up/Call Tree/Event Log will let us drill into the amount of time spent in Parse/Compile. For a more complete picture (like the time spent in Parsing, Preparsing or Lazy Compiling), we can turn on V8’s Runtime Call Stats. In Canary, this will be in Experiments > V8 Runtime Call Stats on Timeline.

Chrome Tracing

about:tracing — Chrome’s lower-level Tracing tool allows us to use the `disabled-by-default-v8.runtime_stats` category to get deeper insights into where V8 spends its time. V8 have a step-by-step guide on how to use this that was published just the other day.

WebPageTest

WebPageTest’s “Processing Breakdown” page includes insights into V8 Compile, EvaluateScript and FunctionCall time when we do a trace with the Chrome > Capture Dev Tools Timeline enabled.

We can now also get the Runtime Call Stats out by specifying `disabled-by-default-v8.runtime_stats` as a custom Trace category (Pat Meenan of WPT now does this by default!).

For a guide on how to get the most out of this, see this gist I wrote up.

User Timing

It’s possible to measure Parse times through the User Timing API as Nolan Lawson points out below:

The third <script> here isn’t important; what matters is that the first <script> is separate from the second, so that performance.mark() runs before the measured <script> has been reached.

This approach can be affected on subsequent reloads by V8’s preparser. This could be worked around by appending a random string to the end of the script, something Nolan does in his optimize-js benchmarks.
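
To make that structure concrete, here’s a minimal sketch of the approach (the mark names and bundle URL are illustrative assumptions); note that what gets measured is parse, compile and top-level execution together:

// Inline <script> placed immediately *before* the bundle's <script> tag:
performance.mark('bundle-parse-start');

// ... <script src="bundle.js?rand=123"></script> is fetched, parsed, compiled
// and executed here; the random query string helps defeat caching on reloads ...

// Inline <script> placed immediately *after* the bundle's tag:
performance.mark('bundle-parse-end');
performance.measure('bundle-parse', 'bundle-parse-start', 'bundle-parse-end');
console.log(performance.getEntriesByName('bundle-parse')[0].duration);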

I use a similar approach for measuring the impact of JavaScript Parse times using Google Analytics:

A custom Google Analytics dimension for ‘parse’ allows me to measure JavaScript parse times from real users and devices hitting my pages in the wild.
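
Building on the User Timing sketch above, reporting that measurement with analytics.js might look roughly like this (the dimension index and mark name are illustrative assumptions; the exact setup used for the chart above may differ):

// Assumes analytics.js is loaded and a custom dimension (index 1 here,
// purely illustrative) has been configured in the Google Analytics admin.
var parseEntry = performance.getEntriesByName('bundle-parse')[0];
if (parseEntry && window.ga) {
  // Custom dimension values are strings; they ride along with the next hit.
  ga('set', 'dimension1', String(Math.round(parseEntry.duration)));
  // A user-timing hit records the same measurement as a timing event.
  ga('send', 'timing', 'JS', 'parse', Math.round(parseEntry.duration));
}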

DeviceTiming

Etsy’s DeviceTiming tool can help measure parse & execution times for scripts in a controlled environment. It works by wrapping local scripts with instrumentation code so that each time our pages are hit from different devices (e.g. laptops, phones, tablets) we can locally compare parse/exec. Daniel Espeset’s Benchmarking JS Parsing and Execution on Mobile Devices goes into more detail on this tool.

What can we do to lower our JavaScript parse times today?

  • Ship less JavaScript. The less script that requires parsing, the lower our overall time spent in the parse & compile phases will be.
  • Use code-splitting to ship only the code a user needs for a route and lazy-load the rest (see the sketch after this list). This is probably going to help the most in avoiding parsing too much JS. Patterns like PRPL encourage this type of route-based chunking, now used by Flipkart, Housing.com and Twitter.
  • Script streaming: In the past, V8 have told developers to use `async/defer` to opt into script streaming for parse-time improvements of between 10–20%. This allows the HTML parser to at least detect the resource early, push the work to the script streaming thread and not halt the document parsing. Now that this is done for parser-blocking scripts too, I don’t think there’s anything actionable we need to do here. V8 recommend loading larger bundles earlier on, as there’s only one streamer thread (more on this later).
  • Measure the parse cost of our dependencies, such as libraries and frameworks. Where possible, switch them out for dependencies with faster parse times (e.g. switch React for Preact or Inferno, which require fewer bytes to boot up and have smaller parse/compile times). Paul Lewis covered framework bootup costs in a recent article. As Sebastian Markbage has also noted, a good way to measure start-up costs for frameworks is to first render a view, delete it and then render again, as this can tell you how it scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales.
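
As a rough sketch of route-based code-splitting with a dynamic import() (the module path and function names here are hypothetical):

// Hypothetical router hook: the profile route's code is only fetched,
// parsed and compiled when the user actually navigates to it.
async function onNavigateToProfile() {
  const { renderProfile } = await import('./routes/profile.js');
  renderProfile(document.getElementById('app'));
}

// Anything reachable only via import() is split into separate chunks by
// bundlers such as webpack or Rollup, keeping the initial bundle small.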

If our JavaScript framework of choice supports an ahead-of-time compilation mode (AoT), this can also help heavily reduce the time spent in parse/compile. Angular apps benefit from this for example:

Nolan Lawson’s ‘Solving the Web Performance Crisis’

What are browsers doing to improve Parse & Compile times today?

Developers aren’t the only ones still catching up on real-world start-up times as an area for improvement. V8 discovered that Octane, one of our more historical benchmarks, was a poor proxy for real-world performance on the 25 popular sites we usually test. Octane can be a poor proxy for 1) JavaScript frameworks (typically code that isn’t mono/polymorphic) and 2) real-world page start-up (where most code is cold). These two use-cases are pretty important for the web. That said, Octane isn’t unreasonable for all kinds of workloads.

The V8 team has been hard at work improving start-up time and we’ve already seen some wins here:

We also estimate a 25% improvement in V8 parse times for many pages, looking at our Octane-Codeload numbers:

And we’re seeing wins in this area for Pinterest too. There are a number of other explorations V8 has started over the last few years to improve Parsing and Compile times.

Code caching

From using V8’s code caching

Chrome 42 introduced code caching — a way to store a local copy of compiled code so that when users returned to the page, steps like script fetching, parsing and compilation could all be skipped. At the time we noted that this change allowed Chrome to avoid about 40% of compilation time on future visits, but I want to provide a little more insight into this feature:

  • Code caching triggers for scripts that are executed twice in 72 hours.
  • For scripts of the Service Worker: code caching triggers for scripts that are executed twice in 72 hours.
  • For scripts stored in Cache Storage via the Service Worker: code caching triggers on the script’s first execution.

So, yes. If our code is subject to caching V8 will skip parsing and compiling on the third load.

We can play around with these in chrome://flags/#v8-cache-strategies-for-cache-storage to look at the difference. We can also run Chrome with `--js-flags=profile-deserialization` to see if items are being loaded from the code cache (these are presented as deserialization events in the log).

One caveat with code caching is that it only caches what’s being eagerly compiled. This is generally only the top-level code that’s run once to setup global values. Function definitions are usually lazily compiled and aren’t always cached. IIFEs (for users of optimize-js ;)) are also included in the V8 code cache as they are also eagerly compiled.
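
A quick sketch of that eager/lazy distinction (illustrative only, not V8 internals):

// Top-level code runs once at load time, is compiled eagerly and is a
// candidate for the code cache.
var config = { retries: 3 };

// A plain function definition is usually only pre-parsed now and fully
// compiled later, when it's first called, so it may not be cached.
function expensiveHelper() { return config.retries * 10; }

// An IIFE is detected as running immediately, so it's compiled eagerly
// and also ends up in the code cache.
var onceResult = (function () { return config.retries * 2; })();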

Script Streaming

Script streaming allows async or defer scripts to be parsed on a separate background thread once downloading begins and improves page loading times by up to 10%. As noted earlier, this now also works for sync scripts.

Since the feature was first introduced, V8 have switched over to allowing all scripts, even parser-blocking <script src="">, to be parsed on a background thread, so everyone should be seeing some wins here. The only caveat is that there’s only one streaming background thread, so it makes sense to put our large/critical scripts in there first. It’s important to measure for any potential wins here.

Practically, this means using <script defer> in the <head>, so that the resource can be discovered early and then parsed on the background thread.

It’s also possible to check with DevTools Timeline whether the correct scripts get streamed — if there’s one big script that dominates the parse time, it would make sense to make sure it’s (usually) picked up by the streaming.

Better Parsing & Compiling

Work is ongoing for a slimmer and faster Parser that frees up memory and is more efficient with data structures. Today, the largest cause of main thread jank for V8 is the nonlinear parsing cost. Take a snippet of UMD:

(function (global, module) { /* … */ })(this, function module() { /* my functions */ })

V8 won’t know that module is definitely needed so we won’t compile it when the main script gets compiled. When we decide to compile module, we need to reparse all of the inner functions. This is what makes V8’s parse-times non-linear. Every function at n-th depth is parsed n times and causes jank.
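
To make that concrete, here’s a small sketch of how nesting multiplies parse work under the behaviour described above:

function outer() {        // pre-parsed with the script, fully parsed when outer() first runs
  function middle() {     // seen again at that point, fully parsed only when middle() first runs
    function inner() {}   // encountered a third time before it is ever compiled
  }
  return middle;
}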

V8 are already working on collecting info about inner functions during the initial compile, so any future compilations can ignore their inner functions. For module-style functions, this should result in a large perf improvement.

See ‘The V8 Parser(s) — Design, Challenges, and Parsing JavaScript Better’ for the full story.

V8 are also exploring offloading parts of JavaScript compilation to the background during startup.

Precompiling JavaScript?

Every few years, the proposal pops up that engines should offer a way to precompile scripts so we don’t waste time parsing or compiling code. The idea is that if a build-time or server-side tool can just generate bytecode instead, we’d see a large win on start-up time. My opinion is that shipping bytecode can increase your load-time (it’s larger) and you would likely need to sign the code and process it for security. V8’s position is that, for now, we think avoiding reparsing internally will help see a decent enough boost that precompilation may not offer too much more, but we are always open to discussing ideas that can lead to faster startup times. That said, V8 are exploring being more aggressive at compiling and code-caching scripts when you update a site in a Service Worker, and we hope to see some wins with this work.

We discussed precompilation at BlinkOn 7 with Facebook and Akamai and my notes can be found here.

The Optimize JS lazy-parsing parens ‘hack’

JavaScript engines like V8 have a lazy parsing heuristic where they pre-parse most of the functions in our scripts before doing a complete round of parsing (e.g. to check for syntax errors). This is based on the idea that most pages have JS functions that are lazily executed, if at all.

Pre-parsing can speed up startup times by checking only the minimum a browser needs to know about functions. This breaks down with IIFEs. Although engines try to skip pre-parsing for them, the heuristics aren’t always reliable, and this is where tools like optimize-js can be useful.

optimize-js parses our scripts in advance and inserts parentheses where it knows (or assumes via heuristics) functions will be immediately executed, enabling faster execution. Some of the paren-hacked functions are sure bets (e.g. IIFEs with !). Others are based on heuristics (e.g. in a Browserify or Webpack bundle it’s assumed all modules are eagerly loaded, which isn’t necessarily the case). Eventually, V8 hopes for such hacks not to be required, but for now this is an optimization we can consider if we know what we’re doing.
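
As a minimal illustration of the transform (hand-written here, not the tool’s exact output):

// Before: the engine may pre-parse this function lazily, then fully
// re-parse it the moment it's invoked as an IIFE.
var singleton = function () {
  return { answer: 42 };
}();

// After the paren hack: the wrapping parentheses hint that the function
// should be parsed and compiled eagerly, skipping the wasted pre-parse.
var singleton = (function () {
  return { answer: 42 };
})();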

V8 are also working on reducing the cost for cases where we guess wrong, and that should also reduce the need for the parens hack.

Conclusions

Start-up performance matters. A combination of slow parse, compile and execution times can be a real bottleneck for pages that wish to boot-up quickly. Measure how long your pages spend in this phase. Discover what you can do to make it faster.

We’ll keep working on improving V8 start-up performance from our end as much as we can. We promise 😉 Happy perfing!

Source

Angular 2 Forms Tutorial – Validation

Introduction

In the first part of this Angular 2 Forms series we created a first form component in Angular 2. This was just a simple form consisting of three input elements: title, author and URL.

This form has been implemented by using the template-driven forms approach of Angular 2. This means that a component’s template is used to arrange the form’s HTML elements. In addition, Angular 2 form directives have been used in the template to enable the framework to construct the internal control model that implements the form functionality. The following template code was used:

<div class="container">
  <h1>Book Form:</h1>
  <form>
    <div>
      <label for="title">Title</label>
      <input type="text" class="form-control" id="title" required [(ngModel)]="model.title" name="title">
    </div>
    <div>
      <label for="author">Author</label>
      <input type="text" class="form-control" id="author" required [(ngModel)]="model.author" name="author">
    </div>
    <div>
      <label for="url">URL</label>
      <input type="text" class="form-control" id="url" required [(ngModel)]="model.url" name="url">
    </div>
    <button type="submit" class="btn btn-default">Submit</button>
  </form>
  <div>
    <h2>Model:</h2>
    {{ currentBook }}
  </div>
</div>

In this second part of the Angular 2 Forms series we’re going to focus on another important aspect of form creation: input validation. Angular 2 makes form validation very easy. In the following you’ll learn how to apply form validation by using HTML validation attributes and Angular 2 validation functionality.

Adding HTML Validation Attributes To Input Elements

Form validation in Angular 2 is based on HTML validation attributes. HTML validation attributes are used on input elements. One validation attribute has already been applied to all three input elements of our form: required. The required attribute defines that entering a value in the input field is mandatory. A full list of HTML validation attributes can be found at https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/HTML5/Constraint_validation.

Now we’re going to add two more validation attributes to the first input field of the book title. Let’s define that we want the user to input a title which has a length between 5 and 30 characters:

<input type="text" class="form-control" id="title" required minlength="5" maxlength="30" [(ngModel)]="model.title" name="title">

To implement that constraint we’re adding the HTML validation attributes minlength and maxlength to the input element.

Furthermore we’re using the pattern HTML validation attribute for the URL input field:

<input type="text" class="form-control" id="url" required pattern="https?://.+" [(ngModel)]="model.url" name="url">

Herewith we make sure that only valid URLs starting with http:// or https:// can be entered in this input field.
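
For reference, the browser treats the pattern attribute as a regular expression that must match the entire input value, roughly equivalent to this check (a simplified sketch in plain JavaScript):

// Roughly what the browser does for pattern="https?://.+": the expression
// is implicitly anchored so it must match the whole value.
var urlPattern = new RegExp('^(?:https?://.+)$');

urlPattern.test('https://angular.io'); // true  -> no pattern error
urlPattern.test('www.angular.io');     // false -> the pattern error is set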

Adding Validation Error Messages To The Form

In the next step we’re going to include error messages in the form template. If a certain validation rule is not met these messages should be displayed to the user:

<div class="container">
  <h1>Book Form:</h1>
  <form>
    <div class="form-group">
      <label for="title">Title</label>
      <input type="text" class="form-control" id="title" required minlength="5" maxlength="30" [(ngModel)]="model.title" name="title" #title="ngModel">
      <div *ngIf="title.errors && (title.dirty || title.touched)" class="alert alert-danger">
        <div [hidden]="!title.errors.required">
          Book title is required!
        </div>
        <div [hidden]="!title.errors.minlength">
          Title must be at least 5 characters long.
        </div>
        <div [hidden]="!title.errors.maxlength">
          Title cannot be more than 30 characters long.
        </div>
      </div>
    </div>
    <div class="form-group">
      <label for="author">Author</label>
      <input type="text" class="form-control" id="author" required [(ngModel)]="model.author" name="author" #author="ngModel">
      <div *ngIf="author.errors && (author.dirty || author.touched)" class="alert alert-danger">
        <div [hidden]="!author.errors.required">
          Book author is required!
        </div>
      </div>
    </div>
    <div class="form-group">
      <label for="url">URL</label>
      <input type="text" class="form-control" id="url" required pattern="https?://.+" [(ngModel)]="model.url" name="url" #url="ngModel">
      <div *ngIf="url.errors && (url.dirty || url.touched)" class="alert alert-danger">
        <div [hidden]="!url.errors.required">
          URL is required!
        </div>
        <div [hidden]="!url.errors.pattern">
          Must be a valid URL!
        </div>
      </div>
    </div>
    <div>
      <button type="submit" class="btn btn-default">Submit</button>
    </div>
  </form>
  <div>
    <h2>Model:</h2>
    {{ currentBook }}
  </div>
</div>

First of all notice that template variables have been introduced for all three input elements by adding

  • #title="ngModel" to the title input control
  • #author="ngModel” to the author input control
  • #url="ngModel" to the URL input control

By using the variables title, author and url in the code we now have access to the form controls. We are able to check if the form control is in an error state and display messages to the user.

To display error messages, a div element is included for every input element:

<div *ngIf="title.errors && (title.dirty || title.touched)" class="alert alert-danger"> ... </div>

NgIf is used to display the content of this element only if the assigned expression evaluates to true. The expression becomes true if the control is in an error state (title.errors is set) and, at the same time, the control is marked as dirty (title.dirty is true) or as touched (title.touched is true). This ensures the error messages are not displayed initially. If the dirty flag is set, the value of the input element has been changed by the user. If the touched flag is set to true, the control has been visited by the user.

For each error message another div block is placed inside of the previously described block:

<div [hidden]="!title.errors.required">
Book title is required!
</div>

The hidden attribute is bound to the negated value of the respective error. E.g. if title.errors.required is true (which means the field value is empty) the hidden attribute is set to false, so that the error message is displayed.

Using Angular 2 Validation CSS Classes

Angular 2 automatically attaches CSS classes to the input elements depending on the state of the control. The following class names are used:

  • ng-touched: Control has been visited
  • ng-untouched: Control has not been visited
  • ng-dirty: Control’s value has been changed
  • ng-pristine: Control’s value hasn’t been changed
  • ng-valid: Control’s value is valid
  • ng-invalid: Control’s value isn’t valid

We can make use of those classes by defining CSS styling which gives additional visual feedback to the user. Insert the following code into the file book-form.component.css:

.ng-valid[required], .ng-valid.required  {
  border-left: 5px solid #42A948; /* green */
}
.ng-invalid:not(form)  {
  border-left: 5px solid #a94442; /* red */
}

First, a red border is displayed on the left side of the input controls, indicating that a value is missing. If the user starts typing and the field constraints are fulfilled, the color changes to green.

Form Validation

The validation logic we’ve implemented so far is specific to single input fields of the form. We’re able to extend the logic to also take into consideration the validation status of the complete form. Evaluating whether a form is valid or invalid can be useful, e.g. to control whether the form can be submitted or not.

First let’s introduce a new template variable for the form itself:

<form #bookForm="ngForm">

With that code in place we’re able to retrieve the validity status of the form by using bookForm.form.valid. Only if all input controls of the form are valid does the form become valid too.

The form validity status can now be used together with the disabled attribute of the form’s submit button:

<button type="submit" class="btn btn-default" [disabled]="!bookForm.form.valid">Submit</button>

Now the submit button is only enabled if the form is in a valid state.

The final result can be seen in the following:

Credit

Mapping The Dominance Of Airbnb On Athens

Airbnb has effectively created a new category of rental housing. This new category — called ‘short-term rentals’ — exemplifies a software-driven, platform-mediated market that occupies the gap between traditional residential rental housing and hotel accommodation.

A prime example of the corporate sharing economy, the American company operates an online marketplace and — with nearly 5 million listings in 81,000 cities and over 300 million check-ins — it has established itself as the world’s largest peer-to-peer hospitality intermediary. Guests benefit from the ease of use, decent rates, access to peer reviews, and a variety of housing options in neighborhoods not traditionally geared to tourism. Hosts benefit from the access to a huge audience, flexible living arrangements, and a steady flow of extra income in these times of economic crisis.

Nonetheless, Airbnb’s impact on cities and housing markets is not immediately obvious.

On the positive side, the company claims that the short-term rental market increases tourism and its economic benefits. It also provides additional income for hosts, particularly those who would not otherwise rent out their housing unit or rooms to longer term tenants, while benefiting neighborhoods that tourists traditionally do not visit, bringing additional customers to local businesses.

On the negative side, local communities and housing advocates point out that Airbnb is making it easier to illegally rent out apartment units to tourists, while taking those units off the market for full-time residents and driving housing costs higher, negatively affecting the quality of life in residential areas.

Accordingly, hotel associations are concerned that short-term rentals function as hotels but have an unfair advantage because they don’t pay taxes and violate safety and zoning regulations.

Attempts to regulate Airbnb, however, have encountered a significant pushback from the company, which summons a powerful weapon through disruptive business and lobbying strategies and by mobilising its community to protest proposed reforms and expand its political influence. While Airbnb and its defenders insist that these reforms must be updated to accommodate the new possibilities presented by the sharing economy, its opponents argue that Airbnb aims to avoid regulation and taxation, and threatens affordable housing in cities.

The company, which is based in San Francisco, was founded in 2008 as a way for people to easily list and rent out their spare rooms or their homes online. There has been a widespread concern, however, that a large amount of the activity on Airbnb is not ‘home sharing’, but rather a new form of de facto hotel that fuels gentrification and displacement.

In response to this concern, I set out to find how Airbnb is really being used in and affecting the Greek capital.

Airbnb’s increasing growth in Athens

To understand the impact of Airbnb on housing in Athens, I downloaded and analysed a dataset compiled by the independent, non-commercial monitoring service Inside Airbnb, which tracks the flow of ads on the online platform.

The data covers the city centre of the Athens urban area, the largest in Greece and one of the most populated urban areas in Europe, sprawling across the central plain of Attica. The study period is May 26, 2009 — May 9, 2017, and every Airbnb listing which existed at any point in this period in the city centre of the Athens urban area has been included in the analysis.

In 2017, there were 5,127 listings reserved at least once on Airbnb in Athens — a 6.8% increase from the previous year (4,801 listings), and a 56.5% increase from 2015 (3,275 listings).

It is not the case that there are 5,127 listings receiving reservations, but rather the fact is that, on average, half of the listings available in Athens in a given month receive at least one review from a guest.

Serving as the glue of the community, reviews can be used as an indicator of Airbnb activity. This metric lets hosts and guests leave a detailed review of their experience. Receiving a positive review from a guest is absolutely vital for a host to get more reservations, and hosts usually encourage guests to leave a review. According to Niels van Doorn, Assistant Professor of New Media and Digital Culture at the University of Amsterdam and Principal Investigator of the Platform Labor research project, “such ratings have become a major decentralized and scalable management technique that outsources quality control to customers of on-demand platforms, creating a generalized audit culture in which service providers are continually pushed to self-optimize and cater to the customer’s every whim.”

Indeed, serious Airbnb entrepreneurs may well refurbish their units to increase their success with the service. But still, the only necessary step for converting a long-term rental to a short-term one and for scoring quick money through Airbnb is just to evict the existing tenants, or not replace them when they depart.

Entire-home listings of high availability dominate Athens

Before proceeding, let’s remind ourselves that helping each other out by sharing our rooms or houses is one thing while commodifying them by charging a fee for their use is quite another. And this leads us to the more innovative aspect of the sharing economy, which is to disturb our material reality; according to author and critic Sebastian Olma, “to coordinate supply and demand of products and services that in their present form were previously unavailable on the market;” in our case, to allow people to sublet their houses.

It seems quite obvious, then, that, despite Airbnb’s outreach focus on small scale and occasional uses of its platform (the way, for example, that homeowners can pay their household regular expenses by hosting guests occasionally), most regulatory scrutiny of short-term rentals has been focused on entire homes or apartments. Of course, this scrutiny is not just limited to the room types in a given area — especially when it comes to Athens, where most of the listings are entire homes or apartments — but it is also extended to a listing’s availability and activity.

An Airbnb host can set up a calendar for their listing so that it is only available for a few days or weeks a year, while other listings are available all year round. Depending on its availability and activity, a home converted to full-time Airbnb use could be more like a hotel, disruptive for neighbors, taking away housing, and thereby fuelling gentrification across the city.

Yes, 91.6% of the Airbnb listings in Athens are available for more than 60 days a year. Keep in mind that the calculation of availability for each listing through insideairbnb.com tracks whether a listing is reported as available or unavailable on its calendar. This approach does not differentiate booked from unavailable properties, which means that the statistics could have underestimated the availability of properties.

Occasional vs. commercial operators

Short-term rentals may also impact the housing shortage in Athens by offering a more lucrative alternative or a more flexible living arrangement to listing a unit on the long-term rental market.

With an average price of €55 per night across Athens, it would not be a hassle for some landlords to evict a tenant for the financial benefits of entering the sharing economy, right? The essence of sharing, however, does not involve the exchange of money. Again, sharing only happens in the absence of market transactions.

This is why the image of a family occasionally renting a spare room in their home, or perhaps renting their entire home for a brief period of time while they are out of town, is not representative of Athens anymore (as if it ever was).

But how can we distinguish commercial from occasional operators?

One way to do this is to look at hosts who have multiple listings on Airbnb. By definition a commercial operator is a host with more than one entire home listing, since only one of their listings could be their primary residence.

Estimating commercial operators this way will dramatically underestimate their numbers, since it will fail to identify hosts who have a single listing which is not their primary residence and which they run as a business. It will also fail to identify hosts who operate their listings via multiple Airbnb accounts. It is a useful first approximation though.

A ‘multi-listing’ is, thus, defined as an entire-home listing whose host has at least one other entire-home listing, or a private-room listing whose host has at least two other private-room listings.

Commercial operators that control multiple entire-home/apartment listings or large portfolios of private rooms make up 43.8% of hosts in Athens.

Here’s the top 10 of them for 2017:

  • Eazybnb Team — 58 listings
  • George — 47
  • Dean — 43
  • Miglen — 29
  • Homm — 25
  • Helena — 20
  • Home Rentality — 19
  • Dima — 17
  • Cleopatra — 17
  • Blueground — 16

The ghost hotels of Athens

Most discussion of Airbnb’s impact on housing availability and affordability focuses on entire-home listings and for good reason. These are the listings which, if rented sufficiently often throughout the year, can no longer be housing a long-term tenant. Private room listings, by contrast, are generally assumed to have little if any impact on housing markets, since they generally do not displace renters.

If we look at groupings of private rooms rented by a single host in the same building, or what the Canadian housing advocacy group Fairbnb has called a ‘ghost hotel’, this assumption is clearly false.

Most ghost hotels in Athens comprise 2 to 5 private-room listings. The most striking fact about Athens’ ghost hotels, apart from simply their existence, is that one of them has 40 private-room listings and that it is a real hotel.

Among others, ghost hotels in Athens include:

a seven-bedroom “gem of 1930 Bauhaus, featuring 3 independent apartments in the same building” and “two separate apartments in the same building that can host up to 13 persons”

Plaka, Exarchia, and Koukaki top the Airbnb chart, surprising no one

I have noticed that when there is a discussion involving affordable housing and Airbnb in Athens, someone will come forward and reference Koukaki as the #1 exclusive Airbnb zone in the city. Almost true, given the small size of the area.

Following the neighborhoods of Plaka and Exarchia, Koukaki ranks third on the Airbnb chart of Athens with 343 listings, and fifth on Airbnb’s top 16 neighborhoods to visit in 2016, with 801% growth from 2015.

FYI: at the time of this writing, there are only 65 apartments available in Koukaki on www.xe.gr, the leading long-term rental platform in Greece.

As you might wonder, ‘#1 Event Venue In Athens | Acropolis View 360°!!!’ is the reason why Kerameikos is the most expensive neighborhood in Athens.

With an average price of €85 per night, Plaka ranks as the #2 most expensive neighborhood in the short-term rental market of Athens, followed by Pentagono, Rigillis, and Zappeion with €81, €75, and €74 per night respectively. I was hoping to see Kolonaki in the top 5, but it’s just #6 with €72.4 per night on average.

Taking those metrics one step further, the map below illustrates the distribution of all 5,127 Airbnb listings in the city centre of Athens on May 9, 2017. Each circle represents a listing. The size of the circle represents the listing’s price per night, and the colour is the room type.

Lastly, let’s take a closer look at the 5 neighborhoods with the greatest concentration of Airbnb listings in Athens.

Hopefully this data adds a bit more evidence to the discussion around the dominance of Airbnb in Athens and how it has become a symbol for quick value extraction. Now, the question is: How can we make the transition from a corporate consumer-driven to a citizen-centric sharing economy?

I’d love to hear your examples of non-profit, community-based alternatives and how they support forms of exchange that could actually be called sharing.

Notes on methodology

I initially became interested in exploring the dominance of Airbnb in Athens after discovering Inside Airbnb’s dataset, and settled on this methodology after coming across UPGO’s analysis of Airbnb in New York zip codes.

Once I isolated the names of each neighborhood in Athens, I queried the dataset to search each neighborhood for available Airbnb listings (focusing on the three available room types), their hosts, availability, and price/night, among others. Using the lat/long coordinates for each listing, I mapped each listing to its neighborhood. All code is written in Python.

Disclaimers

Airbnb provides NO PUBLIC DATA to help understand the use of their platform and the impact on cities around the world. Airbnb also provides NO DATA to cities or states to assist them in ensuring that Airbnb hosts and Airbnb are following the local laws. Tom Slee regularly scrapes the Airbnb site to produce maps and analysis of Airbnb use around the world. The data utilizes public information compiled from the Airbnb website, including the availability calendar for 365 days in the future and the reviews for each listing. Data is verified, cleansed, analyzed and aggregated. No ‘private’ information is being used. Names, photographs, listings and review details are all publicly displayed on the Airbnb site.

Credit

Follow these simple rules and you’ll become a Git and GitHub master

In this article, I won’t cover how to create a GitHub profile or how to use the terminal to make a Git commit. Instead, I will explain why using Git and GitHub every day is so important, especially for those of you who are learning to code. I’ll also share and discuss the three simple rules that you can easily follow to become a master Git and GitHub user.

Why are Git and GitHub so important?

If you are learning to code, chances are your most important goal is to eventually get a job as a software developer. In that case, the answer is very simple:

Learning Git and GitHub is incredibly important because 99% of the companies that can hire you will use Git and GitHub. Therefore, learning how to work with Git and GitHub makes you more hirable and helps you differentiate yourself from more junior developers.

What makes senior developers senior is not that they know the syntax of a given language better, but that they have experience working with large and complex projects with real users and business goals.

When you are learning to code, it’s hard to get that kind of experience. However, a simple way of getting real-world experience is by using the tools and methodologies used in real-world projects. Git and GitHub are an example of those.

Other things you can do are remote pair programming, contributing to open source, and building professionally-designed websites for your portfolio.

Even if you agree that mastering Git and GitHub will help you get a job, you might still be wondering:

“Why are Git and Github so important for companies?”

The short answer is that Git allows teams to efficiently and effectively contribute code to the same project in an asynchronous way. This empowers teams to collaborate better and thus allows them to solve bigger and more complex problems.

Git, which is a distributed version control system, also provides mechanisms to revert changes, create branches of code, solve merge conflicts, and so on. Those are very useful features that solve specific and common problems that every software team faces every day. And Git is the dominant solution nowadays.

GitHub, on the other hand, is an added layer on top of Git that provides solutions to other specific and common problems such as code reviews, pull requests, issue management/bug tracking, and so on.

Quick note: Even though Git is the go-to version control solution for most companies, GitHub has some strong competitors such as GitLab and Bitbucket. However, if you know how to use GitHub, you won’t have any problem working with GitLab or Bitbucket.

Now that you know why it’s so important to master Git and Github, it’s time to tell you the three simple rules to follow to easily become a professional Git and Github user while you are still learning to code.

How to master Git and Github with 3 simple rules

Just for some additional context, I’m the founder of Microverse, a school for remote software developers that is completely free until you get a job. As part of our 22-week program, we not only teach our students how to code, but we also give them plenty of guidance and structure for them to get real-world experience while in the program.

One of the things we ask our students to do is to follow the three rules you will find below in order to become professional Git and Github users. By the end of the training, working with Git, GitHub, branches, pull requests and code reviews becomes second nature for our students.

Before I go ahead and discuss the three simple rules for mastering Git and Github, please consider completing the following tasks:

  • If you are not familiar with Git or GitHub yet, you should complete this awesome tutorial from HubSpot.
  • If you don’t know what the GitHub Flow is, you should learn about Github Flow since we will use it below.

And now, without much further ado, the three simple rules to master Git and Github while learning how to code…

  • Rule #1: Create a Git repository for every new project
  • Rule #2: Create a new branch for every new feature
  • Rule #3: Use Pull Requests to merge code to Master

Even if you are working on small and simple projects, and even if you are working alone, following those three rules every time you code will make you a Git and GitHub master user very quickly.

Let’s briefly break down each one of the rules so you understand what you are supposed to do and why each rule is important.

Rule #1: Create a Git repository for every new project

This first rule is quite straightforward, but making a habit out of it is very important. Every time you start working on something new — your portfolio, a learning project, a solution to a coding challenge, and so on — you should create a new Git repository and push it to GitHub.

Having a dedicated repo is the first step to being able to use version control for every line of code you write. Using version control is how you will work once you join a company and start working on real-world projects. Learn this early and make it a habit.

Quick Note: if using the terminal becomes a hassle and makes you less likely to use Git for all your projects, consider using the Github Desktop app.

Rule #2: Create a new branch for every new feature

Let’s say you are working on your portfolio and you want to build a new “Contact me” section/component. Create a dedicated branch for this new feature, give it a meaningful name (e.g. contact-me-section), and commit all the code to that specific branch.

If you don’t know what branches are, go back to the Github Flow reading that I recommended before.

Working with branches allows you and your team members to work on different features in a parallel way while keeping the specific code for each feature isolated from the rest. This makes it harder for unstable code to get merged into the main code base.

Even if you are the only person on your team, getting used to using feature branches will make the Github Flow process a breeze once you join a real job.

Rule #3: Use Pull Requests to merge code to Master

Every repository starts with a master branch by default. You should never push changes directly to the master branch. Instead, you should use feature branches as described above, and open a new Pull Request to merge the feature branch code with the master branch code.

In a real job, someone will look at your Pull Request and do a code review before approving it. GitHub will even run automated tests on your code and let you know if there is an issue with it. You will also be notified if there is any merge conflict between your code and the code in the master branch. This can happen, for example, if another developer pushed a change to the master branch that affects a file that you also modified.

After your code has been reviewed, tested, and approved, your reviewer will give you thumbs up for you to merge the Pull Request, or they will directly merge your pull request.

Even if you are working alone, get used to creating Pull Requests as a way to merge your changes to the master branch. This, by the way, is the basic workflow used by almost every open source project. If you ever contribute to one (you should!), understanding these three rules will make it really easy for you to get your contribution accepted without any problem.

Wrapping up

If you are still confused, just start slow and keep the three rules in mind. Don’t try to think about “How” to do things yet and focus on “What” to do and “Why” it’s important for now.

Once the “What” and the “Why” are clear, you can figure out the “How” when the time comes to do things. Once you have repeated this process 2–3 times, it will become really easy and natural for you.

I publish new articles every week based on the things that we teach our full-time students. I focus on practical tips and hacks that will make you learn fast while at the same time helping you build strong soft skills and making you more hirable. If you want to stay in touch, you can follow me on Twitter.

Source