Friday, October 28, 2011

20 Best Practices for Speeding up Your Website



1.    Make fewer HTTP requests
•    Use CSS Image Sprites & Image Maps
•    Combine multiple script/CSS files into one
•    Use inline embedded images in the HTML

2.    Use a CDN
3.    Add an Expires or a Cache-Control Header
4.    Gzip components (items 3 and 4 are sketched in code after this list)
5.    Put stylesheets at the top & scripts at the bottom
6.    Make JavaScript and CSS external
7.    Reduce DNS lookups
8.    Minify JavaScript, CSS and HTML
9.    Avoid Redirects
10.    Make Ajax cacheable
11.    Remove duplicate scripts
12.    Configure ETags
13.    Use GET for AJAX Requests
14.    Post-load Components
15.    Preload Components
16.    Reduce the Number of DOM Elements and events
17.    Split Components Across Domains
18.    Minimize the Number of iframes
19.    Optimize Images
20.    Keep Components under 25K
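
To make items 3 and 4 concrete, here is a minimal Node.js sketch in TypeScript of a static-asset handler that sets far-future caching headers and gzips the response. The file path, asset type, and one-year lifetime are illustrative choices, not part of any particular framework.

    // Static-asset handler: far-future caching (item 3) + gzip (item 4).
    import * as http from "http";
    import * as zlib from "zlib";
    import * as fs from "fs";

    const ONE_YEAR_SECONDS = 31536000;

    const server = http.createServer((req, res) => {
      const body = fs.readFileSync("./public/app.js"); // hypothetical asset

      // Item 3: a far-future lifetime lets repeat views skip the request entirely.
      res.setHeader("Cache-Control", "public, max-age=" + ONE_YEAR_SECONDS);
      res.setHeader("Expires", new Date(Date.now() + ONE_YEAR_SECONDS * 1000).toUTCString());
      res.setHeader("Content-Type", "application/javascript");

      // Item 4: gzip the component, but only for clients that accept it.
      const acceptsGzip = /\bgzip\b/.test(String(req.headers["accept-encoding"] || ""));
      if (acceptsGzip) {
        res.setHeader("Content-Encoding", "gzip");
        res.end(zlib.gzipSync(body));
      } else {
        res.end(body);
      }
    });

    server.listen(8080);

In practice the headers and compression usually come from the web server itself (for example Apache's mod_expires and mod_deflate) rather than application code; the sketch just shows which headers do the work.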

Spectrum of Modern Web Applications



These days I am fascinated by the new approaches developers are discovering to author modern web apps. JavaScript and jQuery are at the center of attraction in all these approaches. Sharing here a summary of what I read in the documentation of a Microsoft initiative called Project Silk to author modern web apps.

Spectrum of Web Applications

There is a spectrum of web applications being built today that can be grouped into four application types. These types of web applications are categorized by their full-page reload behavior and the amount of client-side interactivity they provide. Each application type provides a richer experience than the one listed before it.

  • Static sites. These consist of static HTML pages, CSS, and images. They are static in that as each page is navigated to, the browser performs a full-page reload and there is no interaction with portions of the page. In addition, the page does not change no matter who requests it or when.
  • Server rendered. In this model, the server dynamically assembles the pages from one or more source files and can incorporate data from another source during the rendering. The client-side script in these applications might perform some data validation, simple hover effects, or Ajax calls. As each page is navigated to, the browser performs a full-page reload. ASP.NET applications that don't make heavy use of client-side JavaScript are examples of server-rendered web applications.
  • Hybrid design. This model is similar to the server-rendered web application, except that it relies heavily on client-side JavaScript to deliver an engaging experience. This type of application has islands of interactivity within the site that do not require full-page reloads to change the UI as well as some pages that do require a full-page reload. Mileage Stats is an example of a hybrid design.
  • Single-page interface. In this model, a full-page load happens only once. From that point on, all page changes and data loading are performed without a full-page reload. Hotmail, Office Live, and Twitter are examples of single-page-interface web applications. (A small sketch of this pattern follows the list.)
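
To illustrate the single-page-interface model, here is a small TypeScript sketch that loads the page once and then swaps a content region on hash changes via Ajax. The /fragments/ URLs and the #content element are hypothetical, and fetch stands in for whatever Ajax helper the application uses.

    // Single-page interface: one full-page load, then Ajax-driven navigation.
    async function showRoute(): Promise<void> {
      const route = location.hash.slice(1) || "home"; // e.g. #dashboard
      const res = await fetch("/fragments/" + route + ".html"); // partial HTML, not a full page
      const html = await res.text();
      document.querySelector("#content")!.innerHTML = html;
    }

    // Navigation changes the hash; no full-page reload ever happens.
    window.addEventListener("hashchange", showRoute);
    window.addEventListener("load", showRoute);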

Characteristics of Modern Web Applications

While there are many types of modern web applications, addressing many different needs, they all have some characteristics in common.

  • They are standards-focused. To have the broadest reach across multiple platforms and devices, applications attempt to implement the current and evolving standards and adopt future standards once ratified.
  • They are interactive. Modern web applications keep the user engaged by providing constant feedback on their actions. This feedback can come in the form of messages, mouse-over effects, drag-and-drop feedback, the automatic refreshing of screen data, or animations that hide, show, and fade elements. Interactive applications leverage the fast JavaScript engines in modern browsers to perform their client-side tasks.
  • They limit full-page reloads. Modern web applications seek to limit the number of full-page reloads. Reloads are much slower than a localized Ajax call to update a portion of the UI. Full-page reloads also limit the ability to animate state or page changes. By not performing a full-page reload, users can be kept in context, providing a fluid experience as they navigate from one task to another.
  • They are asynchronous. Modern web applications use Ajax to dynamically load data, page fragments, or other assets instead of performing a full-page reload to acquire data or HTML content. Because the loading of data is asynchronous, the UI is able to stay responsive and keep the user informed while the data request is being fulfilled. This asynchronous on-demand loading also reduces application response time because requests can be tuned to return only the data and other content that needs to change.
  • They manage data. When applicable, modern web applications provide client-side data caching and prefetching to boost client-side performance. This enables the UI to respond immediately to user input gestures because it does not have to call the server for data. Data caching also minimizes the impact on server resources, increasing application scalability because fewer calls to the server are required. (A combined sketch of asynchronous loading and client-side caching follows this list.)
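
The last two characteristics combine naturally: below is a minimal TypeScript sketch of asynchronous data loading with a simple client-side cache, so repeated requests for the same data never touch the server. The /api/vehicles endpoint is hypothetical, and fetch stands in for any Ajax mechanism.

    // Asynchronous loading plus client-side caching.
    const cache = new Map<string, unknown>();

    async function getJson(url: string): Promise<unknown> {
      if (cache.has(url)) {
        return cache.get(url); // cache hit: the UI responds without a server call
      }
      const res = await fetch(url); // asynchronous: the UI stays responsive meanwhile
      const data = await res.json();
      cache.set(url, data); // prime the cache for the next gesture
      return data;
    }

    // Usage: refresh one region of the page, not the whole page.
    getJson("/api/vehicles").then((vehicles) => {
      console.log("loaded", vehicles);
    });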

Tuesday, October 18, 2011

Top 10 New Features in SQL Server Denali



I will be writing about the next version of SQL Server, code-named Denali, in the coming few posts.

The next release of Microsoft SQL Server, code-named Denali, is the buzz nowadays. Microsoft has just released Denali CTP3, and the final release is expected by the end of the year. Denali continues SQL Server's push into the enterprise with a number of important features. Here are the 10 most significant new features in the SQL Server Denali release.

10. SQL Server Developer Tools—One of the most obvious improvements in SQL Server Denali is the new development environment, SQL Server Developer Tools, code-named Juneau. Juneau uses the Windows Presentation Foundation (WPF)–based Visual Studio 2010 shell, and it unifies development for Business Intelligence Development Studio (BIDS) and Visual Studio. One goal for Juneau is to make the development environment consistent for both SQL Azure and the on-premises version of SQL Server.

9. Contained databases—Contained databases make it easy to move databases between different instances of SQL Server. With Denali, login credentials are included with the contained database. Users don't need logins for the SQL Server instance because authentication is handled by the contained database. Contained databases have no configuration dependencies on the instance of SQL Server that hosts them and can be moved between on-premises SQL Server instances and SQL Azure.

8. Project "Crescent"—The new data visualization tool, code-named Project "Crescent," is closely integrated with SharePoint 2010 and Silverlight. Microsoft has called the Crescent feature "PowerPoint for your data." Crescent makes it easy for users to create great-looking data pages and dashboards by using data models that are built with PowerPivot or from tabular data in SQL Server Analysis Services.

7. Data Quality Services—Valid data is critical for making effective business intelligence (BI) decisions. Data Quality Services lets you set up a knowledge base that defines your metadata rules. You can then run Data Quality Services projects to apply those rules to data stored in a SQL Server data source. The Data Quality Services projects cleanse the data and allow viewing of good, invalid, and corrected rows.

6. User-defined server roles—An important security-related feature in Denali is the addition of user-defined server roles. Earlier releases had fixed server roles that were predefined by Microsoft. These roles covered most situations, but they weren't as flexible or granular as some organizations wanted. The new user-defined server roles give organizations more control and customization ability over SQL Server's server roles.

5. Change data capture (CDC) for Oracle—CDC lets you keep large tables in sync by initially moving a snapshot to a target server, then moving just the captured changes between the databases. With the SQL Server 2008 release, CDC was limited to SQL Server, but many organizations also have other database platforms they want to use CDC with. A big improvement in the Denali release is the addition of CDC for Oracle.

4. T-SQL enhancements—Two of the most important T-SQL enhancements in Denali are the addition of the Sequence object and the enhanced window functions. Unlike the similar identity column, which is bound to a single table, a sequence is a standalone object, so you can use it to generate unique row identifiers across multiple tables. The window functions perform calculations over a set of rows defined by the OVER clause. You can read more about window functions in "Window Functions (OVER Clause)—Help Make a Difference."
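
Here is a hedged sketch of both enhancements as T-SQL statements sent from a small TypeScript script. The node-mssql driver, the connection string, and the object names (dbo.OrderSeq, dbo.Orders) are assumptions for illustration, not part of the feature itself.

    // Denali T-SQL enhancements: SEQUENCE and window functions with OVER.
    import sql from "mssql";

    async function main() {
      const pool = await sql.connect("Server=localhost;Database=Demo;User Id=demo;Password=demo;");

      // A sequence is a standalone generator, not bound to any one table.
      await pool.request().query(
        "CREATE SEQUENCE dbo.OrderSeq AS INT START WITH 1 INCREMENT BY 1;"
      );
      const next = await pool.request().query(
        "SELECT NEXT VALUE FOR dbo.OrderSeq AS NextId;"
      );
      console.log(next.recordset[0].NextId); // 1 on the first call

      // A window function computes over the set of rows defined by OVER:
      // here, a running total ordered by OrderId.
      const totals = await pool.request().query(
        "SELECT OrderId, Amount, " +
        "       SUM(Amount) OVER (ORDER BY OrderId " +
        "         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal " +
        "FROM dbo.Orders;"
      );
      console.log(totals.recordset);

      await pool.close();
    }

    main().catch(console.error);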

3. Columnstore index—The columnstore index or, as it is sometimes called, the column-based query accelerator, uses the same high-performance, high-compression technology that Microsoft uses in PowerPivot, and it brings that technology into the database engine. Indexed data is stored column by column rather than row by row, and only the columns a query actually needs are read. Microsoft states this technology can provide up to a 100-fold improvement in query performance in some cases.
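
As a hedged sketch (same assumed node-mssql setup; the fact table and column names are hypothetical), creating the new index is a single statement:

    // Create Denali's nonclustered columnstore index on a fact table.
    import sql from "mssql";

    async function main() {
      const pool = await sql.connect("Server=localhost;Database=SalesDW;User Id=demo;Password=demo;");
      // Data is stored per column, so a query touching only SalesAmount
      // reads that column's segments instead of entire rows.
      await pool.request().query(
        "CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CStore " +
        "ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);"
      );
      await pool.close();
    }

    main().catch(console.error);

Note that in Denali a table with a nonclustered columnstore index becomes read-only until the index is dropped or disabled, which is why the feature targets data-warehouse fact tables.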

2. Support for Windows Server Core—The ability to run SQL Server on Windows Server Core has been missing from previous releases of SQL Server. Server Core is designed for infrastructure applications such as SQL Server that provide back-end services but don't really need a GUI on the same server. Denali's support for Server Core enables leaner and more efficient SQL Server installations and at the same time reduces potential attack vectors and the need for patching.

1. AlwaysOn—Without a doubt, the most important new feature in SQL Server Denali is the new SQL Server AlwaysOn feature. AlwaysOn is essentially the next evolution of database mirroring. AlwaysOn supports up to four replicas, the data in the replicas can be queried, and backups can be performed from the replicas. Although it's still early, AlwaysOn seems more complicated to set up than database mirroring because it requires Windows Failover Clustering, but the advantages appear to make it well worth the extra effort.

Sunday, October 9, 2011

Log Shipping vs. Mirroring vs. Snapshot vs. Replication in Databases



Recently, I have been taking more interest in databases and have started learning them in depth. I will keep sharing my deep-dive learnings on this blog. Keep following.

Log Shipping

Log Shipping is an old technique, available since SQL Server 2000. Here the transaction log (.ldf) is transferred periodically to the standby server. If the active server goes down, the standby server can be brought up by restoring all shipped logs.

Usage Scenario: You can cope with a longer downtime, and you have limited investments in terms of shared storage, switches, etc.

Log shipping is based on SQL Server Agent jobs that periodically take log backups of the primary database, copy the backup files to one or more secondary server instances, and restore the backups into the secondary database(s). 

Log shipping supports an unlimited number of secondaries for each primary database.
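
As a hedged sketch, here are the three steps compressed into one TypeScript script (in reality they are separate SQL Server Agent jobs running on separate servers). The node-mssql driver, server names, credentials, and the shared path are all assumptions.

    // One log-shipping cycle: back up the log, (copy it), restore it.
    import sql from "mssql";

    async function shipOnce() {
      const primary = await new sql.ConnectionPool({
        server: "PRIMARY", user: "demo", password: "demo",
      }).connect();
      await primary.request().query(
        "BACKUP LOG Sales TO DISK = N'\\\\share\\logship\\Sales.trn' WITH INIT;"
      );
      await primary.close();

      // The copy job would move the .trn file here if no common share exists.

      const secondary = await new sql.ConnectionPool({
        server: "SECONDARY", user: "demo", password: "demo",
      }).connect();
      // NORECOVERY keeps the secondary restoring so later logs can be applied.
      await secondary.request().query(
        "RESTORE LOG Sales FROM DISK = N'\\\\share\\logship\\Sales.trn' WITH NORECOVERY;"
      );
      await secondary.close();
    }

    shipOnce().catch(console.error);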

Database mirroring is preferable to log shipping in most cases, although log shipping does have the following advantages:

1. It provides backup files as part of the process.
2. Multiple secondaries are supported.
3. It is possible to introduce a fixed delay when applying logs, to allow the secondary to be used for recovering from user error.

Database Mirroring

Database mirroring is functionality in the SQL Server engine that reads from the transaction log and copies transactions from the principal server instance to the mirror server instance.

Database mirroring can operate synchronously or asynchronously.

If configured to operate synchronously, the transaction on the principal will not be committed until it is hardened to disk on the mirror.

Database mirroring also supports automatic failover if the principal database becomes unavailable.

The mirror database is always offline in a recovering state, but you can create snapshots of the mirror database to provide read access for reporting.

Database mirroring, which was introduced with the 2005 edition, works on top of log shipping. The main difference is that the downtime for the standby server is much lower with mirroring. The standby server becomes active automatically in this case (with the help of a broker server, called a Witness in SQL Server parlance), without logs having to be restored (the logs are continuously merged in this scenario – no wonder it's called a mirror).
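
As a hedged sketch, pairing the principal and the mirror comes down to ALTER DATABASE ... SET PARTNER statements (run on the mirror first, then the principal), with SET WITNESS enabling automatic failover. This assumes mirroring endpoints already exist on all instances and the mirror database has been restored WITH NORECOVERY; server names, ports, and the node-mssql driver are illustrative.

    // Pair principal and mirror, then add a witness for automatic failover.
    import sql from "mssql";

    async function main() {
      const mirror = await new sql.ConnectionPool({
        server: "MIRROR", user: "demo", password: "demo",
      }).connect();
      await mirror.request().query(
        "ALTER DATABASE Sales SET PARTNER = 'TCP://principal.corp.local:5022';"
      );
      await mirror.close();

      const principal = await new sql.ConnectionPool({
        server: "PRINCIPAL", user: "demo", password: "demo",
      }).connect();
      await principal.request().query(
        "ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.corp.local:5022';"
      );
      // With synchronous (high-safety) mode, the witness enables automatic failover.
      await principal.request().query(
        "ALTER DATABASE Sales SET WITNESS = 'TCP://witness.corp.local:5022';"
      );
      await principal.close();
    }

    main().catch(console.error);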

Snapshot

A snapshot is a static, read-only picture of a database at a given point in time. Snapshots are implemented by copying a page (8 KB in SQL Server) at a time. For example, assume you have a table in your DB and you want to take a snapshot of it. You specify the physical location for storing the snapshot, and whenever the original table changes, the affected pages are first copied to the snapshot and only then are the changes applied to the DB.

Usage Scenario: You have a separate DB for report generation and want to ensure that the latest data is available for it. You can periodically take snapshots of your transactional database.
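
As a hedged sketch, a snapshot is created with CREATE DATABASE ... AS SNAPSHOT OF, naming a sparse file for each data file of the source database. The database, logical file, and path names below are illustrative, as is the node-mssql driver.

    // Create a read-only, point-in-time snapshot of the Sales database.
    import sql from "mssql";

    async function main() {
      const pool = await sql.connect("Server=localhost;Database=master;User Id=demo;Password=demo;");
      await pool.request().query(
        "CREATE DATABASE Sales_Snap_20111009 ON " +
        "  (NAME = Sales_Data, FILENAME = 'C:\\Snapshots\\Sales_20111009.ss') " +
        "AS SNAPSHOT OF Sales;"
      );
      // Reports can now query Sales_Snap_20111009 for a frozen view of Sales.
      await pool.close();
    }

    main().catch(console.error);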

Replication

Replication is used mainly when data centers are distributed geographically. It is used to replicate data from local servers to the main server in the central data center. The important thing to note here is that there are no standby servers: the publisher and the subscriber are both active.

Usage Scenario: A typical scenario involves periodically syncing local or regional lookup servers with the main server in the data center for better performance, or syncing with a remote site for disaster recovery.

Failover Clustering

Failover Clustering is a high-availability option only (unlike the others above, which can be used for disaster recovery as well) and relies on clustering technology provided by the hardware and the OS. Here the data and databases don't belong to either server; they reside on shared external storage such as a SAN. The advantage of SAN storage is large, efficient, hot-pluggable disk capacity. You will often see DR options like mirroring used together with failover clustering. Here's a good article on adding geo-redundancy to a failover cluster setup.

A few links for further reference

http://blogs.msdn.com/b/mikewat/archive/2007/07/28/database-mirroring-and-log-shipping-which-is-better.aspx

http://stackoverflow.com/questions/525637/what-are-the-scenarios-for-using-mirroring-log-shipping-replication-and-cluster

http://sqldbpool.com/2010/02/15/database-mirroring-vs-log-shipping/

http://msdn.microsoft.com/en-us/library/ms187016.aspx

http://social.msdn.microsoft.com/Forums/en-US/sqldatabasemirroring/thread/ee05954e-0934-4305-8936-b9226e231d06/