The procedure in which the operating system terminates an application’s process when the user navigates away from the application. The operating system maintains state information about the application. If the user navigates back to the application, the operating system restarts the application process and passes the state data back to the application.
Android is moving into the mid-range market with devices such as the Vodafone 845 that have cheaper, less powerful hardware.
Now that Microsoft has released Windows Phone 7, Windows Mobile will disappear.
MeeGo was not available at the time of this writing. It is likely to hit the market in the first quarter of 2011.
Smartphone market overview
MARKET
SHARE
OSs
CONSUMERS
High-end
20%
iOS, Android, webOS, MeeGo, Windows Phone 7, BlackBerry OS6
Within the high-end group, users care about web surfing and applications above anything else, and they’re willing to pay for these features.
Business
35%
BlackBerry, Symbian, Windows Mobile, Windows Phone 7
The business group includes phones that companies buy for their employees. The IT department decides which OS can access the company network so that users can retrieve e-mail and browse secure intranets.
Mid-range
45%
Android, Symbian, BlackBerry, bada, Windows Mobile
Within the mid-range category, users are interested in music, a good camera, and/or easy texting (which requires a hardware keyboard)—all in an affordable device.
Global browser stats for November 2010
SHARE
BROWSER
NOTES
22%
Opera
StatCounter lumps Opera Mini and Opera Mobile together. My personal estimate, based on discussion with Opera, is that about 90% of this number is Mini.
22%
Safari
StatCounter splits up iOS into iPhone, iPod Touch, and iPad. It includes iPad stats with the Safari desktop—not in the mobile statistics. Therefore, this figure excludes the iPad.
19%
BlackBerry
This encompasses mostly the OS5 and older models, which run a browser with a homegrown rendering engine. From OS6 on, BlackBerry uses a WebKit-based browser, and that will make our job a lot easier.
17%
Nokia
Nokia’s WebKit-based browser comes in various flavors, some of which are better than others. Unfortunately StatCounter does not differentiate between each flavor.
11%
Android
The Android market is pretty fragmented when it comes to browsers. There are some subtle differences between browsers on HTC and Sony Ericsson devices. Expect problems to arise from these inconsistencies.
4%
NetFront
NetFront runs mostly on older phones from Asian vendors, notably Sony Ericsson. This figure includes the Sony PlayStation Portable as well as other gaming devices.
1%
UCWeb
The most popular browser in China. It offers little functionality.
1%
Samsung
StatCounter lumps all Samsung browsers together, from old NetFront-based phones to the new WebKit-based bada.
The best mobile browsers
Safari for iOS—the best mobile browser overall,
Android WebKit,
Dolfin for Samsung bada—by far the fastest mobile browser, and
BlackBerry WebKit, the new default browser for OS6 and higher. (Currently only available on the BlackBerry Torch.)
Here are a few tips and tricks that will help you secure your passwords:
HOW TO SET SECURE PASSWORDS
For a password to be secure, it needs to be difficult to guess, as long as possible, and made up of a combination of letters, numbers and symbols. It also needs to be unique for each service that you use. The trouble is that the longer and more difficult to crack a password becomes, the harder it is to remember, which is why many people use the same password everywhere. The good news is that there are a few strategies you can use to set secure and unique, yet memorable, passwords:
Use a password manager. This is probably the easiest and most secure option, and so it’s the one I recommend. There are several excellent tools available, such as LastPass, 1Password and KeePass, that can generate and store extremely tough-to-crack, unique passwords for every service you use. Because the tool manages the passwords for you, you don’t need to worry about forgetting a tricky long password.
Use a password hashing tool. A password hashing tool takes your password, combines it with a parameter (perhaps based on the site’s name or domain) using a hashing function, and produces a very tough-to-crack password. As the tool deals with the hashing for you, you only need to remember the master password. There are several free password hashers available as browser add-ons.
Use a rule-based password strategy. Gina Trapani posted a great rule-based password strategy on Lifehacker back in 2006 (if only all the Lifehacker readers had actually heeded her advice!). The idea is that you take a base password and combine it with the name of the service that you’re creating the password for, using a set of rules. For example, my password for WebWorkerDaily might be %shjk80aily% (an easily memorable master password of shjk80, plus the final four letters from the service name, surrounded by % characters for extra security). Applying the same rules, my password for Amazon would then be %shjk80azon%. You can also reverse or reorder the letters from the service name, or interweave them with the letters of your master password, for even greater security.
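The rule described above is mechanical enough to put into code. A sketch (the shjk80 master password and the %-wrapping rule are the article's own examples, not recommendations):

```javascript
// Sketch of the rule-based strategy described above: master
// password + last four letters of the service name, wrapped
// in % characters.
function rulePassword(master, serviceName) {
  const suffix = serviceName.slice(-4).toLowerCase();
  return "%" + master + suffix + "%";
}

console.log(rulePassword("shjk80", "WebWorkerDaily")); // %shjk80aily%
console.log(rulePassword("shjk80", "Amazon"));         // %shjk80azon%
```

Of course, writing the rule down as code is only for illustration; the whole appeal of the strategy is that the rule lives in your head.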
I started coding in VB4 and did a lot of coding in VB6 for many years. In those days, VB was my native language. I was fond of it. I loved it. I used to think in VB.
But when I learned Java and then C#, I never used VB again. C# is really amazing.
There are a couple of reasons I do not use VB:
VB is verbose
IntelliSense refuses to "let go" unless I tap the Escape key
Syntax highlighting sucks. In C#, all types are highlighted; in VB, only intrinsic types are.
It's VB.
There are no automatic code formatting options like we get with C#
But still, I know there are a couple of things that can be done better in VB.NET.
In Structured Analysis, the focus is only on processes and procedures. Modeling techniques used are DFDs (Data Flow Diagrams), flowcharts, etc. This approach is old and is no longer preferred.
In Object Oriented Analysis, by contrast, the focus is more on capturing the real-world objects in the current scenario that are of importance to the system. [2] It puts more stress on data structures and less on procedural structure. Without actually identifying objects, what are you going to interact with, and whose state will you change? In this approach, objects are identified, along with their relationships to each other, the possible states each object can be in, and finally how all the objects collaborate to achieve a broader system goal. The modeling technique used is UML (Unified Modeling Language), which can represent both the structural and behavioural/procedural aspects of the system. UML includes class diagrams, state diagrams, use case diagrams, sequence diagrams, etc. Using this approach keeps your system more maintainable and reusable, and it is the common choice nowadays.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from common failure scenarios.
Windows API Code Pack for Microsoft .NET Framework
This provides a source code library that can be used to access some features of Windows 7 and Windows Vista from managed code. These Windows features are not currently available to developers in the .NET Framework. The features of the library include:
Windows 7 Taskbar
Jump Lists, Icon Overlay, Progress Bar, Tabbed Thumbnails, and Thumbnail Toolbars
Windows Shell
Windows 7 Libraries
Windows Shell Search API support
Explorer Browser Control
A hierarchy of Shell Namespace entities
Windows Shell property system
Drag and Drop for Shell Objects
Windows Vista and Windows 7 Common File Dialogs, including custom controls
Command Link control and System defined Shell icons
Visual Studio 2010 compliance
Initial xUnit test coverage
String localization
Signed assemblies
Requirements:
Minimum .NET Framework version required to use this library is 3.5 SP1.
The APIs for Shell Extensions require .NET 4.
This library targets Windows 7, though many of the features also work on Windows Vista.
Building and using the Library:
To build the library (except the DirectX related features) in Visual Studio 2008, execute 'Windows API Code Pack Self Extractor.exe' and extract the contents of the ‘Windows API Code Pack 1.1.zip’ file. Build the included ‘WindowsAPICodePack.sln’ file located in the 'WindowsAPICodePack' directory (within the 'source' directory).
To build the DirectX features, build the 'DirectX.sln' file inside the DirectX directory. Additional information on using the DirectX features of the Code Pack can be found in the 'DirectXCodePack_Requirements.htm' document available as a separate download.
Eric Lippert of Microsoft has written an excellent blog post in which he debunks several widely held beliefs and explains many facts about the C# language and its CLR/CLI implementation. Every developer should read it.
This post clears up one of developers' common misconceptions: that value types are always allocated on the stack.
It is usually stated incorrectly: the statement should be "value types can be stored on the stack", instead of the more common "value types are always stored on the stack".
It is almost always irrelevant. We've worked hard to make a managed environment where the distinctions between different kinds of storage are hidden from the user. Unlike some languages, in which you must know whether a particular storage is on the stack or the heap for correctness reasons.
It is incomplete. What about references? References are neither value types nor instances of reference types, but they are values. They've got to be stored somewhere. Do they go on the stack or the heap? Why does no one ever talk about them? Just because they don't have a type in the C# type system is no reason to ignore them.
The way in the past I've usually pushed back on this myth is to say that the real statement should be "in the Microsoft implementation of C# on the desktop CLR, value types are stored on the stack when the value is a local variable or temporary that is not a closed-over local variable of a lambda or anonymous method, and the method body is not an iterator block, and the jitter chooses to not enregister the value."
The sheer number of weasel words in there is astounding, but they're all necessary:
Versions of C# provided by other vendors may choose other allocation strategies for their temporary variables; there is no language requirement that a data structure called "the stack" be used to store locals of value type.
We have many versions of the CLI that run on embedded systems, in web browsers, and so on. Some may run on exotic hardware. I have no idea what the memory allocation strategies of those versions of the CLI are. The hardware might not even have the concept of "the stack" for all I know. Or there could be multiple stacks per thread. Or everything could go on the heap.
Lambdas and anonymous methods hoist local variables to become heap-allocated fields; those are not on the stack anymore.
Iterator blocks in today's implementation of C# on the desktop CLR also hoist locals to become heap-allocated fields. They do not have to! We could have chosen to implement iterator blocks as coroutines running on a fiber with a dedicated stack. In that case, the locals of value type could go on the stack of the fiber.
People always seem to forget that there is more to memory management than "the stack" and "the heap". Registers are neither on the stack nor the heap, and it is perfectly legal for a value type to go in a register if there is one of the right size. If it is important to know when something goes on the stack, then why isn't it important to know when it goes in a register? Conversely, if the register scheduling algorithm of the jit compiler is unimportant for most users to understand, then why isn't the stack allocation strategy also unimportant?
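The hoisting points above (lambdas, anonymous methods, iterator blocks) are easiest to see by analogy with closures in any garbage-collected language. Here is a JavaScript sketch rather than C#, but the lifetime problem is identical: the captured local must outlive the activation of the function that declared it, so it cannot live in that activation's stack frame.

```javascript
// A local that is captured by a closure must survive the
// activation of the function that declared it, so it cannot be
// stored in that activation's stack frame; it is hoisted to
// heap-allocated storage instead.
function makeCounter() {
  let count = 0;            // captured local: heap-allocated
  return function () {      // the closure keeps "count" alive
    count += 1;
    return count;
  };
}

const next = makeCounter(); // makeCounter's activation has ended...
console.log(next());        // 1  ...but "count" is still alive
console.log(next());        // 2
```

This is exactly the situation Lippert describes for C#: the required lifetime of `count` is longer than the activation period of `makeCounter`, so the storage must be long-lived.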
Having made these points many times in the last few years, I've realized that the fundamental problem is in the mistaken belief that the type system has anything whatsoever to do with the storage allocation strategy. It is simply false that the choice of whether to use the stack or the heap has anything fundamentally to do with the type of the thing being stored. The truth is: the choice of allocation mechanism has to do only with the known required lifetime of the storage.
Once you look at it that way then everything suddenly starts making much more sense. Let's break it down into some simple declarative sentences.
There are three kinds of values: (1) instances of value types, (2) instances of reference types, and (3) references. (Code in C# cannot manipulate instances of reference types directly; it always does so via a reference. In unsafe code, pointer types are treated like value types for the purposes of determining the storage requirements of their values.)
There exist "storage locations" which can store values.
Every value manipulated by a program is stored in some storage location.
Every reference (except the null reference) refers to a storage location.
Every storage location has a "lifetime". That is, a period of time in which the storage location's contents are valid.
The time between a start of execution of a particular method and the method returning normally or throwing an exception is the "activation period" of that method execution.
Code in a method can require the use of a storage location. If the required lifetime of the storage location is longer than the activation period of the current method execution then the storage is said to be "long lived". Otherwise it is "short lived". (Note that when method M calls method N, the use of the storage locations for the parameters passed to N and the value returned by N is required by M.)
Now we come to implementation details. In the Microsoft implementation of C# on the CLR:
There are three kinds of storage locations: stack locations, heap locations, and registers.
Long-lived storage locations are always heap locations.
Short-lived storage locations are always stack locations or registers.
There are some situations in which it is difficult for the compiler or runtime to determine whether a particular storage location is short-lived or long-lived. In those cases, the prudent decision is to treat them as long-lived. In particular, the storage locations of instances of reference types are always treated as though they are long-lived, even if they are provably short-lived. Therefore they always go on the heap.
And now things follow very naturally:
We see that references and instances of value types are essentially the same thing as far as their storage is concerned; they go on either the stack, in registers, or the heap depending on whether the storage of the value needs to be short-lived or long-lived.
It is frequently the case that array elements, fields of reference types, locals in an iterator block and closed-over locals of a lambda or anonymous method must live longer than the activation period of the method that first required the use of their storage. And even in the rare cases where their lifetimes are shorter than that of the activation of the method, it is difficult or impossible to write a compiler that knows that. Therefore we must be conservative: all of these storage locations go on the heap.
It is frequently the case that local variables and temporary values can be shown via compile-time analysis to be unused after the activation period ends, and therefore can be treated short-lived, and therefore can go onto the stack or put into registers.
Once you abandon entirely the crazy idea that the type of a value has anything whatsoever to do with the storage, it becomes much easier to reason about it. Of course, my point above stands: you don't need to reason about it unless you are writing unsafe code or doing some sort of heavy interoperating with unmanaged code. Let the compiler and the runtime manage the lifetime of your storage locations; that's what they're good at.
Eric Lippert is a senior software design engineer at Microsoft. He has been working full time in the developer division since 1996, where he has assisted with the design and implementation of VBScript, JScript, JScript .NET, Windows Script Host, Visual Studio Tools for Office and C#.
The idea behind jQuery is to simplify the task of getting a selected subset of DOM elements to work with. In other words, the jQuery library is mostly intended to run queries over the page DOM and execute operations over returned items. But the query engine behind the library goes far beyond the simple search capabilities of, say, document.getElementById (and related functions) that you find natively in the DOM. The query capabilities of jQuery use the powerful CSS syntax, which gives you a surprising level of expressivity. For example, you can select all elements that share a given CSS class, have a given combination of attribute values, appear in a fixed relative position in the DOM tree, or are in a particular relationship with other elements. More importantly, you can add filter conditions and chain all queries together to be applied sequentially.
The root of the jQuery library is the function defined as follows:
var jQuery = window.jQuery = window.$ = function( selector, context ) { return new jQuery.fn.init( selector, context ); };
Nearly any jQuery script is characterized by one or more calls to the $ function -- an alias for the root jQuery function. Any line of jQuery code is essentially a query with some optional action applied to the results.
When you specify a query, you call the root function and pass it a selector plus an optional context. The selector indicates the query expression; the context indicates the portion of the DOM where to run the query. If no context is specified, the jQuery function looks for DOM elements within the entire page DOM. The jQuery root object performs some work on the provided arguments, runs the query, and then returns a new jQuery object that contains the results. The newly created jQuery object can, in turn, be further queried, or filtered, in a new statement as well as in a chain of statements; for example:
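The role of the context argument can be shown without a browser. The sketch below is purely illustrative: a plain object tree stands in for the DOM and matching is by class name only. It is not jQuery's engine, but it shows how a context restricts the search to a subtree:

```javascript
// Toy sketch of the selector + context idea: search only within
// the given context node's subtree (a plain object tree stands
// in for the DOM; matching is by class name only).
function query(className, context) {
  const results = [];
  (function walk(node) {
    if (node.className === className) results.push(node);
    (node.children || []).forEach(walk);
  })(context);
  return results;
}

const page = {
  className: "root",
  children: [
    { className: "Tooltip" },
    { className: "sidebar", children: [{ className: "Tooltip" }] },
  ],
};

console.log(query("Tooltip", page).length);             // 2: whole page
console.log(query("Tooltip", page.children[1]).length); // 1: sidebar only
```

With no context, jQuery behaves as if the context were the whole document, which is exactly the first call above.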
$("div.Tooltip")
The call selects all DIV tags with a CSS class attribute of Tooltip. Written that way, however, the code has no significant effect. The $ function selects one or more DOM elements and that's all. It just returns a new jQuery object that contains the DOM elements. The resulting set is known as the "wrapped set". You can grab the size of this set by calling the size method, as shown below:
alert($("div.Tooltip").size());
Any function you invoke on the wrapped set is called for each element in the set. For example, consider the following code:
$("div.Tooltip:hidden").fadeIn(500);
The query selects all DIV elements currently hidden where the class attribute equals Tooltip. Each of these hidden DIV elements is then displayed using a fade-in algorithm that takes half a second to complete.
You can also loop over each element in the wrapped set using the each function:
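The code sample for each appears to be missing here. In the browser the call would look like `$("div.Tooltip").each(function (index) { ... });`. Since jQuery needs a DOM to run, the standalone sketch below reimplements just the calling convention, to show how `this` and the index are mapped (the `each` function here is illustrative, not jQuery's source):

```javascript
// Standalone sketch of jQuery's each calling convention: the
// callback runs once per element, with "this" bound to the
// element and the zero-based index passed as its argument.
function each(elements, callback) {
  for (let i = 0; i < elements.length; i++) {
    callback.call(elements[i], i);
  }
}

// Two plain objects stand in for DOM elements.
const divs = [{ id: "tip1" }, { id: "tip2" }];
const seen = [];
each(divs, function (index) {
  seen.push(index + ":" + this.id); // "this" is the current element
});
console.log(seen); // [ '0:tip1', '1:tip2' ]
```

Note that the callback is a regular function, not an arrow function, so that `this` can be bound to each element in turn.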
The each function takes a JavaScript callback function and invokes it for each element. The difference between each and a manual JavaScript loop lies in the fact that each automatically maps the this object (as in the snippet) to the element in the collection being processed. The callback function also receives an integer parameter containing the 0-based index of the iteration. If you are interested in this piece of information, you just add a parameter to the definition of the callback passed to each.
As you can see, most of the time by simply calling the function directly on the wrapped set you obtain the same effect as writing the loop yourself. The each function is reserved for special situations where you need to employ some application-specific logic to determine the action to take.
An extract from this article in Dr. Dobb's Journal.
The .NET Micro Framework is .NET for small and resource constrained devices. It offers a complete and innovative development and execution environment that brings the productivity of modern computing tools to this class of devices.
.NET Micro Framework is an open source platform that expands the power and versatility of .NET to the world of small embedded applications. Desktop programmers can harness their existing .NET knowledge base to bring complex embedded concepts to market on time (and under budget). Embedded Developers can tap into the massive productivity gains that have been seen on the Desktop.
1.5 million devices are currently running on the .NET Micro Framework.
.NET Micro Framework Devices
The typical .NET Micro Framework device has a 32-bit processor, with or without a memory management unit (MMU), and can have as little as 64K of random-access memory (RAM). The .NET Micro Framework supports rich user experiences and deep connectivity with other devices.
Such devices include: consumer devices, consumer medical, home automation, industrial automation, automotive, sideshow devices / PC peripherals.
The Microsoft Web Farm Framework is a free product we are shipping that enables you to easily provision and manage a farm of web servers. It enables you to automate the installation and configuration of platform components across the server farm, and enables you to automatically synchronize and deploy ASP.NET applications across them.
The Microsoft Web Farm Framework enables you to easily define a “Server Farm” that you can add any number of servers into. Servers participating in the “Server Farm” will then be automatically updated, provisioned and managed by the Web Farm Framework.
What this means is that you can install IIS (including modules like UrlRewrite, Media Services, etc), ASP.NET, and custom SSL certificates once on a primary server – and then the Web Farm Framework will automatically replicate and provision the exact same configuration across all of the other web servers in the farm (no manual or additional steps required).
You can then create and configure an IIS Application Pool and a new Site and Application once on a primary server – and the Web Farm Framework will automatically replicate and provision the settings to all of the other web servers in the farm. You can then copy/deploy an ASP.NET application once on the primary server – and the Web Farm Framework will automatically replicate and provision the changes to all of the web servers in the farm (no manual or additional steps required).
The Web Farm Framework eliminates the need to manually install/manage things across a cluster of machines. It handles all of the provisioning and deployment for you in a completely automated way.
Here are five tips to help you drive more traffic to your site using SMO techniques:
1. Increase the number of links to your site. A website's popularity rating continues to be influenced by the number of links it gains from other sites. Network to increase that number—and boost your perceived popularity.
2. Apply bookmarking and tagging. Incorporate buttons like "add to Delicious" (or other bookmarking sites) to encourage bookmarking. You can also add relevant tags to your site's pages at bookmarking sites across the Web.
3. Create inbound links. Create new links on your site to the blogs or sites that contain back links to your site. The back-and-forth action should increase its visibility.
4. Make your content travel. Include "portable" content at your site such as PDFs, video, and audio files. Send links to them to your list, and offer them to sites related to your niche.
5. Allow others to use your content. Use RSS feeds to syndicate your content, so that others can use it for their benefit—and drive more traffic to your site in the process.
The Po!nt: Add more social to your SEO. As social sites proliferate, your site could gain a boost in popularity by linking up and networking with fans in communities across the Internet. Apply a few SMO techniques to your site SEO and see what happens!
.NET 4 ships with a much improved version of Entity Framework (EF) – a data access library that lives in the System.Data.Entity namespace.
When Entity Framework was first introduced with .NET 3.5 SP1, developers provided a lot of feedback on things they thought were incomplete with that first release. The SQL team did a good job of listening to this feedback, and really focused the EF that ships with .NET 4 on addressing it.
Some of the big improvements in EF4 include:
POCO Support: You can now define entities without requiring base classes or data persistence attributes.
Lazy Loading Support: You can now load sub-objects of a model on demand instead of loading them up front.
N-Tier Support and Self-Tracking Entities: Handle scenarios where entities flow across tiers or stateless web calls.
Better SQL Generation and SPROC support: EF4 executes better SQL, and includes better integration with SPROCs
Automatic Pluralization Support: EF4 includes automatic pluralization support of tables (e.g. Categories->Category).
Improved Testability: EF4’s object context can now be more easily faked using interfaces.
Improved LINQ Operator Support: EF4 now offers full support for LINQ operators.
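The pluralization bullet above can be illustrated with a toy version of the convention. The sketch is in JavaScript purely for illustration and applies only a couple of English rules; EF4's actual pluralization service handles far more cases:

```javascript
// Toy sketch of name pluralization, as used to map between an
// entity name and its table/entity-set name (Category <-> Categories).
// Only a couple of simple English rules; EF4's real service covers more.
function pluralize(name) {
  if (/[^aeiou]y$/i.test(name)) {
    return name.slice(0, -1) + "ies"; // Category -> Categories
  }
  if (/(s|x|ch|sh)$/i.test(name)) {
    return name + "es";               // Address -> Addresses
  }
  return name + "s";                  // Order -> Orders
}

console.log(pluralize("Category")); // Categories
console.log(pluralize("Order"));    // Orders
```

The designer applies the mapping in both directions: it singularizes table names into entity names, and pluralizes entity names into entity-set names.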
Visual Studio 2010 also includes much richer EF designer and tooling support. The EF designer in VS 2010 supports both a “database first” development style – where you construct your model layer on a design surface from an existing database – and a “model first” development style – where you first define your model layer using the design surface and can then use it to generate the database schema.
Code-First Development with EF
In addition to supporting a designer-based development workflow, EF4 also enables a more code-centric option which we call “code first development”. Code-First Development enables a pretty sweet development workflow. It enables you to:
Develop without ever having to open a designer or define an XML mapping file
Define your model objects by simply writing “plain old classes” with no base classes required
Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
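"Convention over configuration" here means that mapping defaults are inferred from names unless you override them explicitly. A language-neutral sketch (JavaScript, not EF's API) of one such convention, key discovery, where a property named Id or <ClassName>Id is treated as the key unless one is configured:

```javascript
// Sketch of convention over configuration: the key property is
// inferred by convention ("Id" or "<ClassName>Id") unless the
// configuration names one explicitly.
function keyFor(className, properties, configuredKey) {
  if (configuredKey) return configuredKey;        // explicit configuration wins
  if (properties.includes("Id")) return "Id";     // convention 1
  if (properties.includes(className + "Id")) {
    return className + "Id";                      // convention 2
  }
  throw new Error("No key found: configure one explicitly");
}

console.log(keyFor("Product", ["Id", "Name"]));     // Id
console.log(keyFor("Order", ["OrderId", "Total"])); // OrderId
console.log(keyFor("Log", ["Stamp"], "Stamp"));     // Stamp (configured)
```

The pattern generalizes: table names, column names and relationships all get sensible defaults from naming conventions, and the fluent API only has to describe the exceptions.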
EF’s “code first development” support is currently enabled with a separate download that runs on top of the core EF built into .NET 4. CTP4 of this “code-first” library shipped this week and can be downloaded here.
It works with VS 2010, and you can use it with any .NET 4 project (including both ASP.NET Web Forms and ASP.NET MVC).
LOB (Line of Business) applications are enterprise-level business applications that are critical to running complex processes. An LOB application generally has a simple UI but performs big, complex OLTP processing.
- An LOB (line-of-business) application is one of the set of critical computer applications that are vital to running an enterprise, such as accounting, supply chain management, and resource planning applications. LOB applications are usually large programs that contain a number of integrated capabilities and tie into databases and database management systems.
It’s an important architectural pattern to keep the size of your initial XAP file minimal, to give the user a fast initial loading experience. To make this possible, Silverlight provides facilities to load assemblies as and when required.
This should be the architectural design of your app:
Make your application modular and divide it into smaller assemblies.
Load assemblies only when they are required.
Tip: You can also load built-in assemblies at run time on demand. For example, you can load a .NET Framework assembly.
I have written a walkthrough tutorial. You need Visual Studio 2010 Express (or higher) and Silverlight 4.0 installed.
The example app loads an assembly called SilverlightLibrary.dll when the user clicks a text block. This example uses a relative URI to load the assembly from the same location as the application XAP file. It can be loaded from other locations as well.
This example uses the WebClient class to initiate an asynchronous download of the assembly in response to a user mouse click. When the download is complete, the AssemblyPart class is used to load the assembly.
1. Setup the project
Create a new Silverlight application and name it LoadingAssemblyOnDemand.
2. Click OK when the following prompt pops up:
3. Now you have two projects in your solution: a Silverlight app and a web app to host it, as seen in the Solution Explorer below.
4. Add the following elements to your MainPage.xaml. This builds up our basic GUI. Clicking the TextBlock will initiate the dynamic on-demand loading process.
<StackPanel x:Name="stackPanel">
    <TextBlock>Page from Host Application</TextBlock>
    <TextBlock
        MouseLeftButtonUp="TextBlock_MouseLeftButtonUp"
        Cursor="Hand">
        Click Here to Display a UI from the Library Assembly
    </TextBlock>
</StackPanel>
As seen in the screen shot below:
5. This will create a simple UI which you can see in the design view.
6. Now open the code window (MainPage.xaml.cs) and add the following code to the class.
private void TextBlock_MouseLeftButtonUp(
object sender, MouseButtonEventArgs e)
{
// Download an "on-demand" assembly.
WebClient wc = new WebClient();
wc.OpenReadCompleted += new OpenReadCompletedEventHandler(wc_OpenReadCompleted);
wc.OpenReadAsync(new Uri("SilverlightLibrary.dll", UriKind.Relative));
}
private void wc_OpenReadCompleted(
object sender, OpenReadCompletedEventArgs e)
{
// Convert the downloaded stream into an assembly that is
// loaded into the current AppDomain.
AssemblyPart assemblyPart = new AssemblyPart();
assemblyPart.Load(e.Result);
DisplayPageFromLibraryAssembly();
}
private void DisplayPageFromLibraryAssembly()
{
// Create an instance of the Page class in the library assembly
SilverlightLibrary.SilverlightPage page = new SilverlightLibrary.SilverlightPage();
page.ShowMessage("Welcome");
}
7. Add a new project of type Silverlight Class Library to the solution. Name this project SilverlightLibrary. Now the solution has three projects, as shown below:
8. Add a class to this newly added class library project. Name the class SilverlightPage.cs, as shown above.
9. Add a simple method named ShowMessage to this class, with the code shown below:
public void ShowMessage(string message)
{
MessageBox.Show(message);
}
So when the user clicks the TextBlock in the Silverlight app, this assembly and this class (SilverlightPage) will be dynamically loaded, and this method can be called on the class object.
10. Now build this class library project (right-click the project node and click Build):
11. Now you have to add a reference to this class library in your Silverlight app. (Expand the Silverlight app project node, right-click the References node, and click Add Reference….)
12. The Add Reference dialog box opens. Click the Projects tab, select SilverlightLibrary (the class library), and click OK, as shown below:
13. You will see a reference to the class library added under the References node of the project, as shown below:
14. Right-click the SilverlightLibrary node under References, as shown below, and click Properties.
15. The Properties dialog box opens. Set the Copy Local property to False, as shown below, and then save the project and solution.
16. Now build the Silverlight App.
17. We also need to copy the compiled DLL of the class library to the ClientBin folder of the web project (On-Demand-Assembly-Loading.Web). Navigate to the solution folder, then the class library project folder, and then its Bin folder. For me the path was: C:\Users\SUMIT\Documents\Visual Studio 2010\Projects\On-Demand-Assembly-Loading\SilverlightLibrary\Bin\Debug
Copy two files from here (SilverlightLibrary.dll and SilverlightLibrary.pdb) and paste them into the ClientBin folder of the web project. My web project looks something like this after doing so:
18. Now we are done, so run the Silverlight application. This opens the default test page in the browser. We need to verify that the SilverlightLibrary assembly is loaded dynamically at runtime; Firebug in Firefox can help us here. Open Firefox, open the Firebug pane, and enable the Net tab to see the network traffic. Load the test page in Firefox and examine the traffic.
After the first load of the page, the XAP file has been downloaded, taking 4.3 KB of traffic. The SilverlightLibrary assembly has not been loaded yet. There are five HTTP requests in total, making 15.3 KB. See below:
19. Now click the TextBlock to fire up the dynamic loading process.
A welcome message box pops up, which means the dynamic code has executed. There are now six HTTP requests, and the total is now 19.3 KB.
20. When you expand the last (sixth) request, you will see the name of the SilverlightLibrary.dll assembly. It was loaded dynamically when the user clicked the TextBlock.
So this walkthrough helps us understand the complete process of making a Silverlight application modular and loading its assemblies dynamically at runtime from the server.
Please post your comments or any questions. I shall be happy to answer them.